
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
2 answers
2923 views
Mongo: how to convert a string to decimal?
I have written millions of documents with $CURRENT_CASH_BALANCE as strings instead of decimals. How can I convert them all to decimals?
Ryan Scott (11 rep)
Apr 20, 2020, 06:46 PM • Last activity: Jun 2, 2025, 12:02 AM
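
One possible approach, sketched under the assumption of MongoDB 4.2+ (which allows an aggregation pipeline inside updateMany) and a hypothetical collection name accounts; the field name is taken from the question:

// Convert the string field to Decimal128 in place for every matching document.
// Requires MongoDB 4.2+; "accounts" is a hypothetical collection name.
db.accounts.updateMany(
  { CURRENT_CASH_BALANCE: { $type: "string" } },  // only touch string values
  [ { $set: { CURRENT_CASH_BALANCE: { $toDecimal: "$CURRENT_CASH_BALANCE" } } } ]
)

For millions of documents this still rewrites every matching document, so running it off-peak or in batches may be advisable.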
0 votes
1 answer
1455 views
MySQL - Get actual precision and scale of number
I would like to query the integer part and fractional part of all decimal(8,4) values in a column. By that I mean I want to know the actual given values, not the max allowed by the data type (which would be integer = 4, fractional = 4). For example, 33.99 would return (2,2) and 1.375 would return (1,3). I have started trying to parse the numbers as strings using char_length() like this: SELECT max(char_length(SUBSTRING_INDEX(col,'.',1))), max(char_length(SUBSTRING_INDEX(col,'.',-1))); A similar, more intuitive, and uglier way in Oracle is SELECT max(length(regexp_substr(col,'^.*[.]',1))), max(length(regexp_substr(col,'[.].*$',1))). But is there a better way that exploits MySQL's knowledge that these are **actually numbers**? This page implies that the true lengths of numbers - e.g. without leading 0s - are known and retrievable, so I was hoping there was a function that would just do this. Does this have something to do with the assertion that "MySQL returns all data as strings and expects you to convert it yourself", or is that specific to that Python module?
WAF (329 rep)
May 14, 2015, 12:28 PM • Last activity: Apr 23, 2025, 05:03 AM
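
A minimal sketch of the string-based approach, extended to ignore the trailing zeros that DECIMAL(8,4) pads onto every value; t and col are hypothetical names:

-- Count int/frac digits per value, dropping the zeros DECIMAL(8,4) pads on.
-- Note: the implicit string cast includes a leading '-' for negative values,
-- which would be counted as an integer digit; wrap col in ABS() if that matters.
SELECT col,
       CHAR_LENGTH(SUBSTRING_INDEX(TRIM(TRAILING '0' FROM col), '.', 1)) AS int_digits,
       CHAR_LENGTH(SUBSTRING_INDEX(TRIM(TRAILING '0' FROM col), '.', -1)) AS frac_digits
FROM t;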
0 votes
2 answers
175 views
What type should I use to store values like "999999.99" in Cassandra?
I want to add a price column to my table to be able to store values from 0.0 to 999999.99. I tried price DECIMAL(8,2), but it seems it doesn't work. How can I store such a value? I am also wondering whether any pre-/post-processing would be needed after that, because I use NodeJS/TypeScript in the backend, which handles all numeric values using the number type. I would also like to know whether it would be a better idea to use a string type instead.

EDIT: For example, if I try

CREATE TABLE IF NOT EXISTS products.test(
    id TEXT PRIMARY KEY,
    price DECIMAL(8,2));

I get:

> SyntaxException: line 3:15 no viable alternative at input '(' (...
> TEXT PRIMARY KEY, price [DECIMAL](...)

But the following works with no problems:

CREATE TABLE IF NOT EXISTS products.test(
    id TEXT PRIMARY KEY,
    price INT);
user3486308 (151 rep)
Jun 9, 2023, 10:13 PM • Last activity: Jun 22, 2023, 07:07 AM
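
For what it's worth, CQL's decimal type does not accept a precision/scale argument the way SQL's DECIMAL(p,s) does, which is what the parser is objecting to; a sketch of the same table using the bare type:

CREATE TABLE IF NOT EXISTS products.test(
    id TEXT PRIMARY KEY,
    price DECIMAL);   -- CQL decimal is arbitrary-precision; no (8,2) argument

Range and scale limits such as 0.0 to 999999.99 would then need to be enforced in the application layer, and the DataStax Node.js driver surfaces decimal as its own BigDecimal-style type rather than a plain number, so some conversion at that boundary is likely.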
2 votes
1 answer
2783 views
Converting varchar to decimal with truncate
I have a column called latitude which is currently varchar(20). I want to convert it to DECIMAL(9,6). However, the data stored inside the column has more than 6 decimal places, e.g. 48.123456891123456. The table in question has over 50 billion rows, lives in a 24/7 database with no downtime, and uses partitioning by month (SQL Server 2017 Enterprise). How would I achieve the conversion, given the values are too long for 6 decimal places? I was thinking of creating a copy of the column and renaming it once converted; however, I'm not sure how I would achieve the truncation to 6 decimal places.
Quade (321 rep)
Apr 12, 2023, 12:25 PM • Last activity: Apr 13, 2023, 07:22 AM
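
A sketch of the truncating conversion itself, with hypothetical table/column names; in T-SQL, ROUND(value, 6, 1) truncates rather than rounds when the third argument is non-zero:

-- Populate a new DECIMAL(9,6) column in batches, truncating (not rounding).
-- dbo.Positions / latitude / latitude_new are hypothetical names;
-- TRY_CONVERT yields NULL for any non-numeric strings instead of failing.
UPDATE TOP (50000) dbo.Positions
SET latitude_new = ROUND(TRY_CONVERT(DECIMAL(20,17), latitude), 6, 1)
WHERE latitude_new IS NULL
  AND latitude IS NOT NULL;

Repeating that in a loop until no rows are affected keeps each transaction small on a 24/7 table; the final swap is then a column rename.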
1 vote
1 answer
1376 views
How to store decimal numbers in MS Access
I tried to store a decimal number, but couldn't. It's Office 2019. I tried both number types and different "format" settings. One format showed some zeros after a comma, but I could not enter a decimal number; it was rounded. I was thinking about storing GPS coordinates.
Valter Ekholm (115 rep)
Feb 22, 2023, 08:45 AM • Last activity: Feb 22, 2023, 08:52 AM
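
The usual culprit here: Access's Number field defaults to the Long Integer field size, which stores no fractional digits and silently rounds. For GPS coordinates a Double field size works; a sketch in Access DDL, with hypothetical names:

CREATE TABLE Locations (
    ID AUTOINCREMENT PRIMARY KEY,
    Latitude DOUBLE,    -- Field Size "Double" keeps fractional digits
    Longitude DOUBLE
);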
0 votes
1 answer
58 views
Changing data type of an existing column of SQL database
One of the fields in an existing SQL table is Decimal(5,5), with no foreign key relationships. Developers want to change it to Decimal(7,5). What would be the downside of making such a change when the column has existing data? Is there any chance of the application not working after making such a change?
SQL_NoExpert (1117 rep)
Feb 2, 2023, 02:07 AM • Last activity: Feb 7, 2023, 08:50 AM
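
A sketch of the change itself (T-SQL syntax assumed; table/column names hypothetical). Note that DECIMAL(5,5) can only hold values below 1, so widening to (7,5) only adds room to the left of the decimal point and no stored value can be lost:

ALTER TABLE dbo.Rates
    ALTER COLUMN pct DECIMAL(7,5) NOT NULL;  -- restate nullability to avoid accidental changes

The main operational downsides are that increasing decimal precision is typically not a metadata-only change (the table may be rewritten under a lock, which matters on large tables), and that any application code or parameter definitions hard-coded to the old precision would need review.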
13 votes
3 answers
67219 views
Automatic decimal rounding issue
The question is relatively simple. I need to calculate 3 columns where the intermediate results are huge decimals, and I'm running into a problem early on with SQL Server basically rounding the decimals regardless of any casts/converts. For example, let's do a simple division: 1234/1233. A calculator will produce 1.00081103000811. But when I do this on SQL Server, we get the following:

-- Result: rounded to 1.000811000... with trailing zeroes up to precision 37
SELECT CAST(CAST(1234 AS DEC(38,34))/CAST(1233 AS DEC(38,34)) AS DEC(38,37))

-- Result: rounded to 1.000811
SELECT CONVERT(DECIMAL(38,32), 1234)/CONVERT(DECIMAL(38,32),1233)

-- Correct result: 1.00081103000811
-- But this requires the zeroes to be put in manually when you don't
-- even know the precision of the end result
SELECT 1234.0/1233.00000000000000

Why does this automatic rounding occur? And what's the best way to calculate insanely long decimal values when you can't be sure how big a number (the integer or decimal part) will be, since the table can contain various different values? Thanks!
Kahn (1803 rep)
May 8, 2013, 08:15 AM • Last activity: Mar 5, 2022, 09:02 AM
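
Context that may help: for division, T-SQL's documented rules derive precision p1 - s1 + s2 + max(6, s1 + p2 + 1) and scale max(6, s1 + p2 + 1), then cap precision at 38 by shaving scale (with a floor of 6), which is why DEC(38,34) operands leave almost no fractional room. One way to see the derived type directly:

-- SQL_VARIANT_PROPERTY reveals the type SQL Server derived for an expression
SELECT SQL_VARIANT_PROPERTY(CAST(1234 AS DEC(38,34)) / CAST(1233 AS DEC(38,34)), 'Precision') AS result_precision,
       SQL_VARIANT_PROPERTY(CAST(1234 AS DEC(38,34)) / CAST(1233 AS DEC(38,34)), 'Scale')     AS result_scale;

Keeping operand precision no larger than actually needed (e.g. DEC(18,10)) leaves the derivation more scale to work with.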
0 votes
1 answer
227 views
Calculations involving thresholds between two tables
I have a table comment and a table price like below. The key columns in both tables are ticker_id, price_datetime and price_open. The last column threshold in table comment (currently NULL) is something that I need to fill in after some calculation queries (if this is achievable).

comment table:

+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| comment_id | comment_datetime    | author | comment | ticker_id | price_datetime      | price_open | threshold |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| 1          | 2014-09-22 06:05:00 | A1     | C1      | 343       | 2014-09-22 08:00:00 | 53.25000   |           |
| 2          | 2014-09-22 06:39:00 | A2     | C2      | 1         | 2014-09-22 08:00:00 | 62.00000   |           |
| 3          | 2014-09-22 08:13:00 | A3     | C3      | 178       | 2014-09-22 08:13:00 | 5.15000    |           |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+

price table:

+----------+---------------------+------------+-----------+
| price_id | price_datetime      | price_open | ticker_id |
+----------+---------------------+------------+-----------+
| 1        | 2014-09-22 08:01:00 | 62.00000   | 1         |
| 2        | 2014-09-22 08:02:00 | 62.00000   | 1         |
| 3        | 2014-09-22 08:03:00 | 62.00000   | 1         |
+----------+---------------------+------------+-----------+

In each row of table comment, price_open will be used as the "base price".

- For each row of table comment:
  - Match the ticker_id, price_datetime and price_open with table price.
  - Then, take the +-2 days around each price_datetime (along with the price_open).
  - Then, check whether any of the price.price_open values within those 5 days exceeds the "base price" by 5%, 10% or 15%.
- Conditions:
  - If any price.price_open within those 5 days equals/exceeds 15% above the "base price", fill in comment.threshold with "R".
  - If any price.price_open within those 5 days equals/exceeds 10% above the "base price" (but less than 15%), fill in comment.threshold with "A".
  - If any price.price_open within those 5 days equals/exceeds 5% above the "base price" (but less than 10%), fill in comment.threshold with "Y".
  - If every price.price_open within those 5 days is less than 5% above the "base price", fill in comment.threshold with "C".
- For empty values in columns comment.price_datetime and comment.price_open, we will leave them NULL as they are, and thus NULL for comment.threshold as well.

Is the above achievable in MySQL using JOIN? I am trying to learn about JOIN right now; unfortunately it seems way too complicated to me, and I have no clue about the query I should execute as I just started learning MySQL. I've tried explaining my question in detail, but if there's anything unclear, kindly let me know. Any help would be much appreciated. Thank you.
**EDIT** (as requested by Verace):

CREATE statements:

CREATE TABLE comment (
  comment_id int(11) NOT NULL AUTO_INCREMENT,
  comment_datetime datetime NOT NULL,
  author varchar(25) NOT NULL,
  title varchar(250) NOT NULL,
  comment text NOT NULL,
  ticker_id int(11) NOT NULL,
  price_datetime datetime DEFAULT NULL,
  price_open decimal(12,5) DEFAULT NULL,
  threshold varchar(10) DEFAULT NULL,
  PRIMARY KEY (comment_id)
)

CREATE TABLE price (
  price_id int(11) NOT NULL AUTO_INCREMENT,
  price_open decimal(12,5) DEFAULT NULL,
  ticker_id int(11) NOT NULL,
  price_datetime datetime NOT NULL,
  PRIMARY KEY (price_id),
  UNIQUE KEY datetime (price_datetime,ticker_id)
)

Expected result:

+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| comment_id | comment_datetime    | author | comment | ticker_id | price_datetime      | price_open | threshold |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| 1          | 2014-09-22 06:05:00 | A1     | C1      | 343       | 2014-09-22 08:00:00 | 53.25000   | C         |
| 2          | 2014-09-22 06:39:00 | A2     | C2      | 1         | 2014-09-22 08:00:00 | 62.00000   | Y         |
| 3          | 2014-09-22 08:13:00 | A3     | C3      | 178       | 2014-09-22 08:13:00 | 5.15000    | R         |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
merv (153 rep)
Jan 28, 2015, 11:38 PM • Last activity: Oct 13, 2021, 12:03 AM
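
A hedged sketch of one way to do this with a multi-table UPDATE in MySQL, simplified to consider only upward moves relative to the base price; it assumes the schemas given in the edit:

-- Classify each comment by the largest price_open seen within +-2 days.
UPDATE comment c
JOIN (
    SELECT c2.comment_id,
           MAX(p.price_open / c2.price_open) AS max_ratio
    FROM comment c2
    JOIN price p
      ON p.ticker_id = c2.ticker_id
     AND p.price_datetime BETWEEN c2.price_datetime - INTERVAL 2 DAY
                              AND c2.price_datetime + INTERVAL 2 DAY
    WHERE c2.price_datetime IS NOT NULL
      AND c2.price_open IS NOT NULL
    GROUP BY c2.comment_id
) m ON m.comment_id = c.comment_id
SET c.threshold = CASE
    WHEN m.max_ratio >= 1.15 THEN 'R'
    WHEN m.max_ratio >= 1.10 THEN 'A'
    WHEN m.max_ratio >= 1.05 THEN 'Y'
    ELSE 'C'
END;

If a 15% drop should also count, MAX(ABS(p.price_open / c2.price_open - 1)) compared against 0.15/0.10/0.05 would be the variant to use.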
2 votes
2 answers
3273 views
How to count decimals without zeros
How can I count all decimal places up to the first zero? For example, I have a column named x with an entry of numeric datatype, value 1.955000000. I want to count the decimal places without the zeros, so here I want the query to produce an output of 3, not 9 (with zeros).
DataLordDev (63 rep)
Aug 12, 2019, 08:02 AM • Last activity: Sep 13, 2021, 06:50 PM
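
The question doesn't name a DBMS; assuming PostgreSQL (where numeric is common), a sketch that trims trailing zeros from the text form and measures what remains — t and x are hypothetical names:

-- Count fractional digits of x after dropping trailing zeros: 1.955000000 -> 3
SELECT length(split_part(trim(trailing '0' from x::text), '.', 2)) AS frac_digits
FROM t;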
0 votes
1 answer
1652 views
Displaying the digit before the decimal point in Oracle
I have a logic where I want to calculate and display numbers based upon some operations. The declarations are:

V_CALPERCENT nvarchar(100),
v_yearMSA1 nvarchar(100),

I want to calculate V_CALPERCENT, and I have one value for v_yearMSA1 = 2. So here goes my calculation:

V_CALPERCENT := (v_yearMSA1 * 2.25) / 100

It returns .05, whereas I want it to also display the digit before the decimal point. How do I get that? Please help.
HEEN (203 rep)
Jan 12, 2021, 02:35 PM • Last activity: Jan 13, 2021, 07:44 PM
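
In Oracle this is a display-formatting issue: TO_CHAR with a format mask that includes a digit position before the decimal point forces the leading zero. A sketch:

-- '0' in the mask forces a digit (here the leading zero); FM strips padding
SELECT TO_CHAR(0.05, 'FM990.00') FROM dual;   -- returns '0.05'

Declaring V_CALPERCENT as NUMBER instead of nvarchar(100), and formatting only at display time, would also avoid relying on implicit string/number conversions.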
2 votes
3 answers
12210 views
How to prevent PostgreSQL from automatically rounding numeric types?
I have a simple schema and query for an MWE.

Schema:

CREATE TABLE ttable ( tcol Numeric(19,4) )

Query:

INSERT INTO ttable (tcol) VALUES ('123.45678');
SELECT * FROM ttable;

Result: 123.4568

In this case, I have lost precision. I entered five decimal places, but Postgres automatically rounded down to four. I want this to be an error, or at least a warning of some kind that I can detect, so that I can tell the user about the loss of precision. How can I make this happen?

Edit: This question is obviously not a duplicate of this question. In that question, the user is using a client application which rounds values below the stored precision. My question is about Postgres itself rounding the data to fit, not the client display. This would be obvious to anyone who looked further than the title of both questions.
TechnoSam (139 rep)
Dec 21, 2020, 03:25 PM • Last activity: Dec 22, 2020, 07:41 PM
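
One hedged possibility: keep the column as unconstrained numeric but attach a CHECK on the scale, e.g. via a domain, so out-of-scale input raises an error instead of being rounded. The scale() function is available in reasonably recent PostgreSQL versions; numeric_s4 is a hypothetical name:

-- A domain over plain numeric: values keep their digits, but more than
-- 4 decimal places is rejected instead of silently rounded.
CREATE DOMAIN numeric_s4 AS numeric CHECK (scale(VALUE) <= 4);

CREATE TABLE ttable (tcol numeric_s4);
INSERT INTO ttable VALUES ('123.4567');   -- ok
INSERT INTO ttable VALUES ('123.45678');  -- ERROR: value violates the check constraint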
8 votes
5 answers
16140 views
How does SQL Server determine precision / scale?
I'm running SQL Server 2012.

SELECT 0.15 * 30 / 360, 0.15 / 360 * 30

Results: 0.012500, 0.012480

This one is even more confusing to me:

DECLARE @N INT = 360
DECLARE @I DECIMAL(38,26) = 0.15 * 30 / 360
DECLARE @C DECIMAL(38,26) = 1000000

SELECT @C * @I * POWER(1 + @I, @N) / ( POWER(1 + @I, @N) - 1 )
SELECT @C * (@I * POWER(1 + @I, @N) / ( POWER(1 + @I, @N) - 1 ) )

The first SELECT gives me the correct result: 12644.44022. The second one truncates the result: 12644.00000.
cpacheco (83 rep)
Sep 25, 2014, 06:44 PM • Last activity: Sep 3, 2020, 09:36 AM
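
For reference, the documented T-SQL derivation rules: for e1 * e2, precision is p1 + p2 + 1 and scale is s1 + s2; for e1 / e2, precision is p1 - s1 + s2 + max(6, s1 + p2 + 1) and scale is max(6, s1 + p2 + 1); when the derived precision exceeds 38 it is capped and scale is sacrificed. Because these rules apply step by step, operand order changes the intermediate types and hence the result. The derived type of any expression can be inspected directly:

-- Probe the type SQL Server actually derived for the expression
SELECT SQL_VARIANT_PROPERTY(0.15 * 30 / 360, 'BaseType')  AS base_type,
       SQL_VARIANT_PROPERTY(0.15 * 30 / 360, 'Precision') AS result_precision,
       SQL_VARIANT_PROPERTY(0.15 * 30 / 360, 'Scale')     AS result_scale;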
3 votes
1 answer
753 views
What's the smartest way to find the scale of an unrestricted numeric type?
Out of laziness and convenience I integrated quite a lot of data from a partner into an unrestricted *numeric* type in my PostgreSQL database. However, it now seems that, depending on the shipment from this partner, the scale of the *numeric* varies from zero to over 20 decimal places, which doesn't make a lot of sense and probably consumes quite a lot of storage for no good reason. So I would like to restrict my *numeric* field to a reasonable scale. However, because my partner doesn't provide any recommendations, I would like to identify the occurrence of each scale in my data-set, to find a reasonable middle ground between 0 and 20 (zero obviously not being an option, and given the 17 million row count, aggregated sums ARE going to be impacted by my decision). It will also be needed during the subsequent rounding, to avoid actually increasing the scale of values that were shipped with a scale of zero. ---------- Long story short, what's the nicest way to compute that for a specific number stored as an arbitrary *numeric*? The best I could come up with is the following, but is there a more elegant way to do it without converting to text?
SELECT
  my_numeric,
  COALESCE(
    char_length(                            -- Finding size of string extracted by a...
      substring(my_numeric::text,'\.(\d*)') -- regexp to return all digits right from '.'
    ),                                      -- but if scale is 0 substring return NULL                               
  0                                         -- so I handled this inside a COALESCE
  ) AS my_numeric_scale
FROM
(VALUES
(0.1::numeric),
(0.12),(0.123),
(0.1234),
(0.12345),
(0.123456),
(0.000001),
(0.100000)
) foo (my_numeric)
MarHoff (253 rep)
Jul 4, 2019, 10:40 AM • Last activity: Jul 4, 2019, 12:34 PM
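
For the aggregate part, recent PostgreSQL versions expose scale(numeric) directly, which avoids the text round-trip entirely; a sketch against a hypothetical table (my_table/my_numeric):

-- Occurrence count of each scale across the whole data set
SELECT scale(my_numeric) AS num_scale, count(*) AS occurrences
FROM my_table
GROUP BY 1
ORDER BY 1;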
4 votes
1 answer
181 views
According to the SQL92 spec, can I store the value 1 in a field where the precision = scale?
Is it SQL92 compliant to store the number 1 in a field that is defined as NUMERIC(3,3)? What about DECIMAL(3,3)? Does this mean that the decimal point floats within the precision, or is it fixed, so that values must have 3 decimal places even if they are all zero?
jcalfee314 (185 rep)
Aug 5, 2012, 02:05 PM • Last activity: Oct 29, 2018, 03:22 PM
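
In practice, NUMERIC(3,3) means 3 total digits, all 3 of them after the decimal point, so the representable range lies strictly between -1 and 1 and storing 1 overflows. A quick illustration (PostgreSQL syntax, but the behavior is typical across implementations):

CREATE TABLE t (v NUMERIC(3,3));
INSERT INTO t VALUES (0.999);  -- ok: fits in 3 fractional digits
INSERT INTO t VALUES (1);      -- ERROR: numeric field overflow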
5 votes
1 answer
697 views
Question regarding decimal arithmetic
I think my understanding of precision vs. scale might be incorrect, as the following example produces values that do not make sense to me. decimal(32, 14) rounds the result to 6 decimal places, while decimal(18, 14) rounds to 19. My understanding of decimal is decimal(p, [s]), where p is the total number of digits and s is the number of digits after the decimal point (e.g., decimal(10, 2) would allow 8 digits to the left of the decimal point and 2 digits to the right). Is this not correct? I created a small example that illustrates the seemingly odd behavior:

--------------------
-- Truncates at pipe
-- 1.043686|655...
--------------------
declare @dVal1 decimal(32, 14) = 10
declare @dVal2 decimal(32, 14) = 9.581419815465469

select @dVal1 Val1, @dVal2 Val2, @dVal1 / @dVal2 CalcResult

----------------
-- Most accurate
----------------
declare @dVal3 decimal(18, 14) = 10
declare @dVal4 decimal(18, 14) = 9.581419815465469

select @dVal3 Val3, @dVal4 Val4, @dVal3 / @dVal4 CalcResult

So, on to the question: what am I missing here? The articles and MSDN blogs I have read don't seem to provide clarity (at least to my thought process). Can someone explain to me why a higher precision seems to result in a loss of scale?
Mythikos (53 rep)
Oct 15, 2018, 08:06 PM • Last activity: Oct 15, 2018, 09:06 PM
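
The same derivation rules noted under the earlier "How does SQL Server determine precision / scale?" entry apply: decimal(32,14)/decimal(32,14) derives a precision far above 38, so SQL Server caps it at 38 and gives the fractional part only what remains after reserving room for the integral part, while the smaller decimal(18,14) operands leave much more scale intact. The derived types can be checked directly:

DECLARE @a DECIMAL(32,14) = 10, @b DECIMAL(32,14) = 9.581419815465469;
-- Probe the precision and scale SQL Server derived for the quotient
SELECT SQL_VARIANT_PROPERTY(@a / @b, 'Precision') AS result_precision,
       SQL_VARIANT_PROPERTY(@a / @b, 'Scale')     AS result_scale;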
18 votes
3 answers
154962 views
Convert string numeric values with comma as decimal separator to NUMERIC(10, 2)
I have an SQL table of varchar columns which contain Greek-formatted numbers (. as thousand separator and , as decimal separator). The classic conversion CONVERT(numeric(10,2), REPLACE([value],',','.')) does not work because the . (thousand separator) kills the conversion. E.g. try CONVERT(numeric(10,2), REPLACE('7.000,45',',','.')). I want to convert such values to numeric(10,2). Any suggestions on how to handle this?
PanosPlat (521 rep)
Oct 14, 2015, 06:58 PM • Last activity: Jun 20, 2018, 09:30 PM
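
A sketch of the usual two-step fix: strip the thousands separators first, then turn the decimal comma into a dot:

-- Remove '.' thousands separators, then swap the decimal ',' for '.'
SELECT CONVERT(numeric(10,2), REPLACE(REPLACE('7.000,45', '.', ''), ',', '.'));
-- returns 7000.45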
10 votes
2 answers
2221 views
Problem with union casting integer to ceiling(decimal)
I have this scenario: it looks like MySQL is taking the largest decimal value and trying to cast the other values to that. The problem is that this query is generated by an external library, so I don't have control over this code, at this level at least. Do you have some idea how to fix this?

SELECT 20 AS x UNION SELECT null UNION SELECT 2.2;

+------+
| x    |
+------+
| 9.9  |  -- why from 20 to 9.9?
| NULL |
| 2.2  |
+------+

Expected result:

+------+
| x    |
+------+
| 20   |  -- or 20.0, doesn't really matter in my case
| NULL |
| 2.2  |
+------+

Adding more context: I'm using Entity Framework 6 with an extension library http://entityframework-extensions.net/ to save changes in batches, specifically the method context.BulkSaveChanges(); this library creates queries using SELECT ... UNION.
ngcbassman (103 rep)
Apr 24, 2018, 05:06 AM • Last activity: May 16, 2018, 03:43 PM
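
When the query can be influenced at all, one hedged workaround is to make the first branch's type explicit and wide enough, so the type MySQL infers for the UNION doesn't clip the integer branch:

SELECT CAST(20 AS DECIMAL(10,1)) AS x
UNION SELECT NULL
UNION SELECT 2.2;
-- returns 20.0 / NULL / 2.2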
5 votes
3 answers
726 views
Storing two integers as a decimal
What are the downsides to storing two integers as a decimal? I am storing asset details in tables; each asset type has its own table (each asset is very different), and I use another table to define the asset tables, so each asset table has an integer id and each asset also has an integer id. I have two different scenarios where this could be handy:

1. an "audit" table that stores information like: this user did this to that item
2. someone is assigned to work on this asset of this type.

I was thinking of storing it as assetType.assetID, so asset type 5 and id 99 would be the decimal 5.99. I would very rarely need to select based on 5.99; I would just query the record that stores the 5.99, then split it and use a function to go to table 5, record 99. I can't tie the assetID to a specific table; assetType is the id of an entry in a table referencing the asset tables (it defines things like table name, primary key column and the like), so it already seems like I wouldn't be able to use foreign key constraints either way. There are a lot of asset tables, like asset_tmv and asset_backflow. An asset is assigned a type by what table it is in, as the data to be stored for each asset varies greatly. I realise I could achieve this using 2 integer fields. What I am wondering is: what would the downsides be?
mike16889 (209 rep)
Apr 27, 2018, 12:17 AM • Last activity: May 1, 2018, 12:29 AM
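
One concrete downside worth spelling out: decimals treat trailing zeros as insignificant, so distinct id pairs can collapse into the same value unless every id is padded to a fixed scale. For example (MySQL shown, but the arithmetic holds generally):

-- Type 5 / id 99 and type 5 / id 990 become numerically equal as decimals:
SELECT 5.99 = 5.990;   -- returns 1 (true)

Two integer columns with a composite index avoid that ambiguity and the string/decimal splitting entirely.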
0 votes
1 answer
34 views
Having values or some state in a database
Suppose I have a table with several price values in it:

ID Int, Primary Key, not null
AveragePrice Decimal, not null
MinPrice Decimal, not null
MaxPrice Decimal, not null

Each price value can be known and present (represented as a positive value) or it can be:

* infinite (without upper bound)
* unknown
* irrelevant
* customer specific

There may be even more states. **How to model this in a database?** Currently these states are modelled as negative constants, e.g. a price of -10 means "unknown" and -20 means "irrelevant". Since prices are normally positive this kind of works, but I hope there exists a more elegant solution.
TobiMcNamobi (103 rep)
Apr 4, 2018, 07:35 AM • Last activity: Apr 4, 2018, 12:12 PM
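
A common alternative to magic negative numbers is a nullable price plus an explicit state column, with a CHECK tying the two together; a sketch with hypothetical names, shown for one of the price columns:

CREATE TABLE Price (
    ID INT NOT NULL PRIMARY KEY,
    MinPrice DECIMAL(12,2) NULL,            -- NULL whenever the state is not 'known'
    MinPriceState VARCHAR(20) NOT NULL DEFAULT 'known',
    CHECK (MinPriceState IN ('known', 'infinite', 'unknown', 'irrelevant', 'customer_specific')),
    CHECK (MinPriceState <> 'known' OR MinPrice IS NOT NULL)
);

New states then become new allowed values rather than new magic numbers, and aggregates over MinPrice no longer have to filter out sentinel values.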
0 votes
1 answer
628 views
Average Value not working
I have a database of students and grades across varying report cycles. Each report cycle has a unique ID and each subject also has a unique ID. Students can have more than one teacher per subject, so I need to average the grades given and produce just the averaged grade for their attainment scores. An example of the raw data is below:

+--------------+------+-----+----+-------------+---+
| 223599152142 | 12   | 92  | 3  | Mathematics | 0 |
| 223599152142 | 12   | 92  | 3  | Mathematics | 3 |
| 223599152142 | 12   | 92  | 7  | History     | 3 |
| 223599152142 | 12   | 92  | 12 | Economics   | 3 |
| 223599152142 | 12   | 92  | 12 | Economics   | 4 |
| 223599152142 | 12   | 92  | 26 | Latin       | 2 |
| 223599152142 | 12   | 109 | 3  | Mathematics | 3 |
| 223599152142 | 12   | 109 | 3  | Mathematics | 4 |
| 223599152142 | 12   | 109 | 7  | History     | 3 |
| 223599152142 | 12   | 109 | 26 | Latin       | 2 |
| 223599152142 | 12   | 109 | 26 | Latin       | 3 |
| 223599152142 | 12   | 110 | 3  | Mathematics | 4 |
| 223599152142 | 12   | 110 | 7  | History     | 2 |
| 223599152142 | 12   | 110 | 7  | History     | 3 |
| 223599152142 | 12   | 110 | 26 | Latin       | 2 |
| 223599152142 | 12   | 110 | 26 | Latin       | 3 |
+--------------+------+-----+----+-------------+---+

I wish to average the GradeTransposeValue based on the SubjectID for each ReportCycleID, but I am getting odd results. I have set up the query below to focus on just one student and one report cycle to show the problem:

SELECT intSubjectID,
       txtCurrentSubjectReportName,
       CAST(AVG(intGradeTransposeValue) AS decimal(5,2)) AS avg_attainment
FROM VwReportsManagementAcademicReports
WHERE intReportCycleAcademicYear = 2017
  AND intNCYear > 6
  AND intGradeID = 1
  AND txtSchoolID = 223599152142
  AND intReportCycleID = 110
GROUP BY intSubjectID, txtCurrentSubjectReportName

This produces this result:

+--------------+-----------------------------+----------------+
| intSubjectID | txtCurrentSubjectReportName | avg_attainment |
+--------------+-----------------------------+----------------+
| 7            | History                     | 2.00           |
| 26           | Latin                       | 2.00           |
| 3            | Mathematics                 | 4.00           |
+--------------+-----------------------------+----------------+

However, if we look at Latin as an example, the original values were 3 and 2, so (3+2)/2 should be 2.5 and not 2. Any help on this would be much appreciated.
MIS_Gem (161 rep)
Apr 3, 2018, 02:34 PM • Last activity: Apr 3, 2018, 03:31 PM
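
The likely cause, assuming SQL Server: AVG over an int column returns an int, so 3 and 2 average to 2 before the outer CAST ever runs. Casting inside the aggregate fixes it; a sketch against the same view:

SELECT intSubjectID,
       txtCurrentSubjectReportName,
       -- cast BEFORE averaging so the aggregate works in decimal, not int
       CAST(AVG(CAST(intGradeTransposeValue AS decimal(5,2))) AS decimal(5,2)) AS avg_attainment
FROM VwReportsManagementAcademicReports
WHERE intReportCycleAcademicYear = 2017
  AND intNCYear > 6
  AND intGradeID = 1
  AND txtSchoolID = 223599152142
  AND intReportCycleID = 110
GROUP BY intSubjectID, txtCurrentSubjectReportName;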