
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 votes
1 answers
432 views
MySQL split table on insert
I am new so please go easy on me :) I have the following table
CREATE TABLE send_sms (
	sql_id BIGINT(20) NOT NULL AUTO_INCREMENT,
	momt ENUM('MO','MT') NULL DEFAULT NULL,
	sender VARCHAR(20) NULL DEFAULT NULL,
	receiver VARCHAR(20) NULL DEFAULT NULL,
	udhdata BLOB NULL,
	msgdata TEXT NULL,
	time BIGINT(20) NULL DEFAULT NULL,
	smsc_id VARCHAR(255) NULL DEFAULT NULL,
	service VARCHAR(255) NULL DEFAULT NULL,
	account VARCHAR(255) NULL DEFAULT NULL,
	id BIGINT(20) NULL DEFAULT NULL,
	sms_type BIGINT(20) NULL DEFAULT NULL,
	mclass BIGINT(20) NULL DEFAULT NULL,
	mwi BIGINT(20) NULL DEFAULT NULL,
	coding BIGINT(20) NULL DEFAULT NULL,
	compress BIGINT(20) NULL DEFAULT NULL,
	validity BIGINT(20) NULL DEFAULT NULL,
	deferred BIGINT(20) NULL DEFAULT NULL,
	dlr_mask BIGINT(20) NULL DEFAULT NULL,
	dlr_url VARCHAR(255) NULL DEFAULT NULL,
	pid BIGINT(20) NULL DEFAULT NULL,
	alt_dcs BIGINT(20) NULL DEFAULT NULL,
	rpi BIGINT(20) NULL DEFAULT NULL,
	charset VARCHAR(255) NULL DEFAULT NULL,
	boxc_id VARCHAR(255) NULL DEFAULT NULL,
	timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
	binfo VARCHAR(255) NULL DEFAULT NULL,
	meta_data TEXT NULL,
	priority BIGINT(20) NULL DEFAULT NULL,
	foreign_id VARCHAR(255) NULL DEFAULT NULL,
	PRIMARY KEY (sql_id)
)
COLLATE='latin1_swedish_ci'
ENGINE=InnoDB
;
I have two different applications talking to each other via these tables. I would like to count the inserted rows (only 3 in the below example, but it could be 1000s at a time) and separate them into 3 other existing tables (same format). So for example:
INSERT INTO send_sms
( momt, sender, receiver, msgdata, sms_type, dlr_mask, dlr_url ) 
VALUES ( 'MT','1234', '447XXXXXXXX', 'Hello world4', 2, 27, 'test1' ),
( 'MT','Sender', '447XXXXXXXY', 'Hello world4', 2, 27, 'test2' ),
( 'MT','Sender', '447XXXXXXXY', 'Hello world4', 2, 27, 'test3' );
send_sms1 (would have test1), send_sms2 (would have test2), send_sms3 (would have test3). It would need to support counts that are not multiples of 3: for example, with 10,000 rows, send_sms1 and send_sms2 should get 3,333 each and send_sms3 should get 3,334 (it doesn't matter which order). As this is a live system, the table needs to be accessible at the same time (so during the move the table must stay writable for other insert commands). I have tried things like
INSERT INTO send_sms2(SELECT * FROM send_sms WHERE sql_id >= (SELECT COUNT(*) FROM send_sms_dump)/2);
The above was just a test to split data. It worked, however there isn't any delete, so the data wasn't moved - just 50K rows copied from one table to the other. Please point me in the right direction so I can do some more research :) Update: thank you Rick James for the solution. I am now trying to create a trigger:
DELIMITER $$

CREATE TRIGGER my_trigger AFTER INSERT ON send_sms_dump
FOR EACH ROW
BEGIN
    -- Statement one
    INSERT INTO send_sms_dump2
    SELECT * FROM send_sms_dump
             WHERE sql_id % 3 = 0;
    -- Statement two
    INSERT INTO send_sms_dump3
    SELECT * FROM send_sms_dump
             WHERE sql_id % 3 = 1;
    -- More UPDATE statements
	INSERT INTO send_sms_dump4
    SELECT * FROM send_sms_dump
             WHERE sql_id % 3 = 2;
    -- More UPDATE statements
	DELETE FROM send_sms_dump;
END$$
I have the above; the error is ... /* SQL Error (1442): Can't update table 'send_sms_dump' in stored function/trigger because it is already used by statement which invoked this stored function/trigger. */ ... I am guessing it's because of the DELETE command. I didn't want to drop the table, as the third-party application will be writing to it again - let me know if I should post a new question :) thanks guys
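Error 1442 is expected here: a trigger on send_sms_dump cannot also DELETE from send_sms_dump. A common workaround is to route each incoming row from inside the FOR EACH ROW trigger using NEW.* (so no DELETE is needed in the trigger) and purge the staging table separately, e.g. from a scheduled event. The even three-way distribution itself can be sketched as follows (hypothetical split_round_robin helper in Python, assuming roughly consecutive AUTO_INCREMENT ids):

```python
def split_round_robin(row_ids, n_targets=3):
    """Bucket rows by id modulo n_targets, mirroring the sql_id % 3
    routing idea from the trigger above."""
    buckets = {i: [] for i in range(n_targets)}
    for rid in row_ids:
        buckets[rid % n_targets].append(rid)
    return buckets

buckets = split_round_robin(range(1, 10001))
```

With 10,000 consecutive ids the buckets come out as 3,333/3,333/3,334, matching the requirement; which bucket gets the extra row depends on where the id range starts.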
mcgster (11 rep)
May 5, 2019, 05:35 PM • Last activity: Apr 24, 2025, 12:06 AM
1 votes
2 answers
1102 views
Split read/write requests over different read/write datasources
Recently I ran some performance tests on the application I work on, and it turns out that it didn't do really well (the problem is mainly between the back end and the DB). While investigating the problem/solution, we found that using read/write datasources (read/write master, 1 or multiple read slaves) could be a good way to go, as described here: http://fedulov.website/2015/10/14/dynamic-datasource-routing-with-spring/ To sum up, the solution consists of defining the datasources and, before each transaction (@Transactional), deciding which datasource to use. But with an already huge number of defined services and transactions (my case), it seems too time-consuming to choose at every step which datasource to use. Is there an automated way to split select vs. insert/update operations, or a project that serves as a proxy to route the queries? (I have seen this question that was asked 9 years ago, but I think there are certainly new solutions: How to setup Hibernate to read/write to different datasources?) Also, I have read about latency problems between writing and reading (are we talking about ms or s of latency?). Does the number of read instances influence the latency? What can be done to prevent such behavior before starting to code (an architecture to adopt, maybe? a design pattern?). PS: I am using Spring, Spring Data JPA, Hibernate, PostgreSQL, and the Hikari connection pool. Thank you for your time.
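For the "automated way" part: SQL-aware proxies do exactly this statement-based split, e.g. pgpool-II for PostgreSQL; replication lag is typically milliseconds but is unbounded under load, and more replicas do not by themselves reduce it. The classification step such proxies perform can be sketched like this (hypothetical pick_datasource helper; a real router must handle many more cases):

```python
import re

# Statements that are safe to send to a replica, in the naive model.
READ_ONLY = re.compile(r"^\s*(select|show|explain)\b", re.IGNORECASE)

def pick_datasource(sql):
    """Naive statement-based routing: SELECT/SHOW/EXPLAIN go to a
    replica, everything else to the primary. Real routers must also
    keep open transactions and SELECT ... FOR UPDATE on the primary."""
    return "replica" if READ_ONLY.match(sql) else "primary"
```
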
ch.E (111 rep)
Aug 16, 2019, 09:48 AM • Last activity: Apr 21, 2025, 10:03 AM
0 votes
1 answers
502 views
SQL - Split timestamp into multiple rows
I am dealing with data that needs to be looked at on a shift-to-shift basis (8:00:00 to 20:00:00 and its reciprocal are the two shifts) There are instances where a timestamp (one row) will span longer than a shift. Below is an example of what I am looking for.
----------------------------------------------------------------------------------------------------------
                                           Original Timestamp Data
----------------------------------------------------------------------------------------------------------
   START_TIME             END_TIME
2020-07-16 04:54:50	 2020-07-27 06:36:14

----------------------------------------------------------------------------------------------------------
                                           Updated Timestamp Data
----------------------------------------------------------------------------------------------------------
   START_TIME             END_TIME
2020-07-16 04:54:50	 2020-07-16 08:00:00
2020-07-16 08:00:00	 2020-07-16 20:00:00
2020-07-16 20:00:00	 2020-07-17 08:00:00
2020-07-17 08:00:00	 2020-07-17 20:00:00
        .                      .
        .                      .
        .                      .
2020-07-26 20:00:00  2020-07-27 06:36:14
Here is the code I have tried, but I am only able to split the data into two rows. Something tells me that the "Start Roll" and "End Roll" columns within #T1 are not going to work in a situation like this.
Declare @DayTurn as DATETIME, @NightTurn As DATETIME, @TodaysDate As DATETIME, @DateCheck As DATETIME, @TimeChange As Integer, @MidNight As DATETIME

Set @DayTurn = '8:00:00'
Set @NightTurn = '20:00:00'
SET @TodaysDate = GETDATE()
SET @DateCheck = CASE WHEN DATEPART( WK, @TodaysDate) >= 7 THEN DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0)
    ELSE DATEADD(Week,-6,DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0))
END;

SELECT  	
      (Case 
	    When cast(Activity.[START_TIME_UTC] as time) >= cast(@DayTurn as time) and cast(Activity.[END_TIME_UTC] as time) > cast(@NightTurn as time) and cast(Activity.[START_TIME_UTC] as time) < cast(@DayTurn as time) then CONVERT(DATETIME, CONVERT(CHAR(8), Activity.[START_TIME_UTC] , 112) + ' ' + CONVERT(CHAR(8), @DayTurn, 108))
		else CONVERT(datetime, Activity.[START_TIME_UTC]) end) as 'Start Roll'
      ,(case
		When cast(Activity.[START_TIME_UTC] as time) < cast(@DayTurn as time) then CONVERT(DATETIME, CONVERT(CHAR(8), Activity.[START_TIME_UTC], 112) + ' ' + CONVERT(CHAR(8), @DayTurn, 108))
		else CONVERT(datetime, Activity.[END_TIME_UTC]) end ) As 'END_TIME'
	  ,(Case
		When cast(Activity.[START_TIME_UTC] as time) >= cast(@DayTurn as time) and cast(Activity.[END_TIME_UTC] as time) > cast(@NightTurn as time) and cast(Activity.[START_TIME_UTC] as time) = @DateCheck


SELECT * INTO #T2 from(
Select 
	temp.[START_TIME]
	,temp.[END_TIME]
From #T1 as temp
UNION
Select
	temp.[Start Roll]
	,temp.[End Roll]
From #T1 as temp
) as temp;


SELECT 
  *
FROM #T2 
Order By START_TIME;

Drop Table #T1
Drop Table #T2
Any and all help is greatly appreciated. Cheers!
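The core of the problem is interval arithmetic, independent of the two-column Start Roll/End Roll attempt: repeatedly cut the row at the next 08:00/20:00 boundary until the end time is reached. A sketch of that loop (hypothetical helpers; in T-SQL the same expansion is usually done with a numbers/tally table rather than a loop):

```python
from datetime import datetime, timedelta

def next_shift_boundary(ts):
    """First shift boundary (08:00 or 20:00) strictly after ts."""
    for h in (8, 20):
        cand = ts.replace(hour=h, minute=0, second=0, microsecond=0)
        if cand > ts:
            return cand
    # past 20:00 -> next day's 08:00
    return (ts + timedelta(days=1)).replace(hour=8, minute=0,
                                            second=0, microsecond=0)

def split_by_shift(start, end):
    """Expand one [start, end] row into rows that never cross a
    shift boundary, as in the 'Updated Timestamp Data' example."""
    rows, cur = [], start
    while cur < end:
        nxt = min(next_shift_boundary(cur), end)
        rows.append((cur, nxt))
        cur = nxt
    return rows
```
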
BigMac (1 rep)
Jul 28, 2020, 01:46 PM • Last activity: Apr 12, 2025, 08:03 AM
0 votes
2 answers
1709 views
Split-brain scenario with an Always On availability group
Due to maintenance, our primary site will be offline for a few hours. I have an availability group with 2 nodes at the primary site and one node at the DR site. Since the primary site will be off, I will need to fail over to the DR site before the maintenance. My question: if I fail over to the DR site, shut off all the nodes at the primary site one after another, and once the maintenance is over simply turn the nodes at the primary site back on one by one and then fail over to the primary site, will that create a split-brain scenario? Since the primary site will be down, will the Windows Server Failover Cluster go down and the availability group become inaccessible?
SQL_NoExpert (1117 rep)
Apr 4, 2022, 12:12 PM • Last activity: Feb 7, 2025, 06:04 PM
1 votes
1 answers
678 views
SSIS 2016 conditional split with multiple columns in conditions
I am having an issue in executing a conditional split for upserting records into my production table, using SSIS 2016 and MSSQL 2016 (Standard Ed.) I am trying to load two separate files (produced from an OpenVMS database) that contain similarly-formatted content, however they are from two different companies: AB_CustomerData.txt and CD_CustomerData.txt. Customer Format files RecordType: CU01 ---------------- RecordType 2 characters Company 2 characters CustomerNumber 7 characters CustomerName 50 characters RecordType: CU02 ---------------- RecordType 2 characters Company 2 characters CustomerNumber 7 characters City 9 characters State 8 characters RecordType: CU03 ---------------- RecordType 2 characters Company 2 characters CustomerNumber 7 characters Phone 10 characters AB_CustomerData.txt ------------------- CU01AB0001234ABC Company CU02AB0001234SmalltownAnywhere CU03AB00012342135551212 CU01AB0002345Unbrella Corp CU02AB0002345SmalltownAnywhere CU03AB00023452135551213 CU01AB0003456MegaCorp CU02AB0003456SmalltownAnywhere CU03AB00034562135551214 CD_CustomerData.txt ------------------- CU01CD0001234Jake's Widgets CU02CD0001234SmalltownAnywhere CU03CD00012342134441313 CU01CD0005678Jane's Doohickies CU02CD0005678SmalltownAnywhere CU03CD00056782135551314 CU01CD0006789Frank's Thingamabobs CU02CD0006789SmalltownAnywhere CU03CD00067892135551315 My end result is to have this in my production table: | Company | CustomerNumber | CustomerName | City | State | Phone| |-|-|-|-|-|-| | AB | 0001234 | ABC Company | Smalltown | Anywhere | 2135551212| | AB | 0002345 | Umbrella Corp | Smalltown | Anywhere | 2135551213| | AB | 0003456 | MegaCorp | Smalltown | Anywhere | 2135551214| | CD | 0001234 | Jake's Widgets | Smalltown | Anywhere | 2135551313| | CD | 0005678 | Jane's Doohickies | Smalltown | Anywhere | 2135551314| | CD | 0006789 | Frank's Thingamabobs | Smalltown | Anywhere | 2135551315| I have a ForEach container to loop through these files in my directory, and do the 
following:
- load the file into a pre-staging table
- process the customer record types (CU01, CU02, CU03 for each customer) into record-type-specific staging tables (i.e. record type CU01 goes to a CU01 staging table, etc.)
- merge the record types into one larger staging table, containing all records
- merge join the staging table and the production table, to prepare for upserting
- upsert the production table
My conditional splits are defined as follows:
INSERT: (ISNULL(Production_CustomerNumber) && 
!ISNULL(Staging_CustomerNumber)) && (ISNULL(Production_Company) && 
!ISNULL(Staging_Company))
UPDATE: (!ISNULL(Production_CustomerNumber) && 
!ISNULL(Staging_CustomerNumber)) && (!ISNULL(Production_Company) && 
!ISNULL(Staging_Company))
DELETE: (!ISNULL(Production_CustomerNumber) && 
ISNULL(Staging_CustomerNumber)) && (!ISNULL(Production_Company) && 
ISNULL(Staging_Company))
On the first pass of the ForEach container, the data from the first company file loads correctly all the way through to production. However, on the second pass of the ForEach container, any data pre-existing in the production table gets deleted. I am almost positive it is because of my conditional split definitions, but I can't seem to figure out where.
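One likely cause, sketched below: on the second pass the full-outer merge join compares all production rows against a staging set that contains only the second company, so the first company's rows fall into the DELETE branch. Scoping the production side (or the DELETE condition) to the company currently being loaded avoids that. A hypothetical simulation of the split:

```python
def conditional_split(production, staging, company):
    """Simulate the merge join + conditional split for one company's
    file. Keys are (company, customer_number). Scoping the production
    side to the company being loaded keeps pass two from classifying
    the other company's rows as DELETEs."""
    prod_keys = {k for k in production if k[0] == company}
    stage_keys = set(staging)
    return (stage_keys - prod_keys,   # INSERT branch
            stage_keys & prod_keys,   # UPDATE branch
            prod_keys - stage_keys)   # DELETE branch
```
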
Kulstad (95 rep)
Jul 3, 2019, 03:17 AM • Last activity: Jan 24, 2025, 01:03 AM
0 votes
1 answers
86 views
How can I disable the MaxScale readwriterouter for specific queries?
I'm using Galera replication and the MaxScale readwritesplit router. I'm facing an issue because the application has been developed with this flow: 1. start transaction 2. update a record 3. commit 4. read that record. The result is that the record is updated using the write server and the next read is done on a read server, so it doesn't get the data that has just been updated, due to replication delay. Unfortunately it is a bit difficult to refactor the whole application, and I'm looking for a solution to force a read on the write server so I can be sure to get the data that was just updated.
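Depending on the MaxScale version, two things may help without refactoring (both worth verifying against the docs for your version): the readwritesplit causal_reads option, and per-query routing hints via the hint filter. With the hint filter enabled on the service, a query can be pinned to the master by prefixing a comment, e.g. (hypothetical route_to_master wrapper):

```python
def route_to_master(sql):
    """Prefix a MaxScale routing hint so readwritesplit sends this one
    query to the master. Assumes the hintfilter is configured on the
    MaxScale service; hint syntax per the MaxScale hint filter docs."""
    return "-- maxscale route to master\n" + sql

hinted = route_to_master("SELECT balance FROM accounts WHERE id = 1")
```
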
Tobia (211 rep)
Oct 22, 2024, 02:26 PM • Last activity: Oct 23, 2024, 03:14 AM
0 votes
1 answers
138 views
best practice for splitting a running db?
I have been tasked with configuring a new MySQL server and splitting half our databases to it. I want to do this well and correctly. The challenge is we can't bring the system down, as our clients are constantly writing machine data to it. So I'm wondering, as I've never done this on a live system before, how I would best split it to the new server. I'm only migrating half the databases, and so far I've just exported them for import as a starting point. But the databases are still being written to with new data AND the imports are taking forever: 4-10 hrs each. To try and speed things up, I've disabled autocommit and foreign key checks for each database, but I'm not seeing much time savings so far. My concern is for the data that's being written behind the export, if that makes any sense. How do I preserve data as I split this thing up? As you can likely tell, this is new to me. I know SQL, but not on this scale, and the task falls to me because in the kingdom of the blind, the one-eyed man is king. :P EDIT: I should note, these servers are running WinServer 2022 and MySQL Server 8.0.34
WhiteRau (103 rep)
Jul 25, 2023, 01:55 PM • Last activity: Jul 28, 2023, 02:23 AM
0 votes
0 answers
66 views
Move data from existing column to new columns in Sequel pro
I need help with my SQL table, so thank you in advance for any comments. My table in Sequel Pro (example):

| Object | Time | Intensity |
|--------|------|-----------|
| A      | 1    | 100       |
| A      | 2    | 150       |
| A      | 3    | 300       |
| B      | 1    | 150       |
| B      | 2    | 300       |
| C      | 1    | 80        |
| C      | 2    | 100       |
| C      | 3    | 140       |
| C      | 4    | 200       |

And I would like it to look like this:

| Time | A    | B    | C   |
|------|------|------|-----|
| 1    | 100  | 150  | 80  |
| 2    | 150  | 300  | 100 |
| 3    | 300  | NULL | 140 |
| 4    | NULL | NULL | 200 |

Thanks!
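This is a classic long-to-wide pivot; in MySQL it is usually written with conditional aggregation (MAX(CASE WHEN Object='A' THEN Intensity END) grouped by Time). The transformation itself, sketched in Python (hypothetical pivot helper):

```python
def pivot(rows):
    """Pivot (object, time, intensity) rows into one row per time with
    a column per object; missing combinations become None (NULL)."""
    objects = sorted({obj for obj, _, _ in rows})
    times = sorted({t for _, t, _ in rows})
    lookup = {(obj, t): val for obj, t, val in rows}
    header = ["Time"] + objects
    table = [[t] + [lookup.get((obj, t)) for obj in objects] for t in times]
    return header, table
```
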
Serffest (1 rep)
Feb 10, 2023, 07:00 PM • Last activity: Feb 10, 2023, 09:22 PM
1 votes
1 answers
2512 views
Can I split text from one column into two existing columns in MySQL
SELECT Appointment,
   SUBSTRING_INDEX(SUBSTRING_INDEX(Appointment, ' ', 1), ' ', -1) AS Date,
   SUBSTRING_INDEX(SUBSTRING_INDEX(Appointment, ' ', 2), ' ', -1) AS Time
   FROM go_applicants
which separates the date and time but creates 2 new columns. Is it possible to insert/update this split into existing columns in the table? This would allow me to run this as an event instead and continually update the table.
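Yes - the same SUBSTRING_INDEX expressions can be reused in an UPDATE ... SET against the existing columns (assuming those columns already exist). What SUBSTRING_INDEX computes can be sketched as follows (hypothetical substring_index helper; the sample timestamp is made up):

```python
def substring_index(s, delim, count):
    """Rough Python equivalent of MySQL's SUBSTRING_INDEX: everything
    before the count-th delimiter, counted from the right if count < 0."""
    parts = s.split(delim)
    if count > 0:
        return delim.join(parts[:count])
    return delim.join(parts[count:])

appointment = "2021-08-16 15:50:00"          # hypothetical Appointment value
date_part = substring_index(appointment, " ", 1)   # what the Date column would get
time_part = substring_index(appointment, " ", -1)  # what the Time column would get
```
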
Dan Hardy (13 rep)
Aug 16, 2021, 03:50 PM • Last activity: Aug 16, 2021, 04:25 PM
1 votes
1 answers
489 views
Mysql (win) Is it possible to get databases from multiple drives in the same time?
My question would be, is there any way to read the mysql data folder from different drives in the same time (Windows) ? For example: I have 1TB database from the drive C:, and another 1TB from the drive D:. So when I start mysql it will simply see both databases from both drives.
carouselcarousel (13 rep)
May 2, 2021, 07:06 PM • Last activity: May 2, 2021, 08:49 PM
2 votes
0 answers
1386 views
Export and split QUERY results in multiple files
I would like to know if it's possible to export and split query results in multiple files. Let's assume I have a query returning 100 results and I want to save the results in 5 different csv files (20 results per file). Is there any way to do it? Regards,
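There is no single built-in for this; the usual approach is to fetch the result set once and write it out in chunks client-side. A sketch (hypothetical export_chunks helper writing to in-memory buffers instead of real files):

```python
import csv
import io

def export_chunks(rows, n_files, header):
    """Split query results across n_files CSV outputs (StringIO here),
    giving any remainder rows to the first chunks."""
    base, extra = divmod(len(rows), n_files)
    outputs, start = [], 0
    for i in range(n_files):
        size = base + (1 if i < extra else 0)
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(header)
        writer.writerows(rows[start:start + size])
        outputs.append(buf.getvalue())
        start += size
    return outputs

chunks = export_chunks([(i, f"name{i}") for i in range(100)], 5, ["id", "name"])
```
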
user110366 (41 rep)
Jul 13, 2020, 01:59 PM • Last activity: Jul 13, 2020, 03:16 PM
0 votes
1 answers
4198 views
Split date range (timestamp) into equal parts by Month- SQL Server
Experts, I have a question regarding splitting a date range into equal parts by month, including the time part. For example: fromdate - 06/29/2020 09:00:00 and todate - 06/29/2021 09:00:00. I want to split this date range into twelve equal parts like below:
06/29/2020 09:00:00 - 06/30/2020 12:59:59
07/01/2020 00:00:00 - 07/31/2020 12:59:59
.........
.......
06/01/2021 00:00:00 - 06/29/2021 09:00:00
I can't write a recursive CTE as this is a SQL Synapse module I am running against. With the below query I am able to split the date part, but the time part is not coming out properly as above. Please help, as this is blocking my development.
declare @FromTs DATETIME
declare @ToTs DATETIME
SET @FromTs = GetDate()
SET @ToTs = DATEADD(month, 12, @FromTs)
;WITH n(n) AS (
    SELECT ROW_NUMBER() OVER (ORDER BY [object_id])-1 FROM sys.all_columns
),
d(qi,qrt,qtt,n,f,t,md,bp,ep,rn) AS (
    SELECT ,n.n, @FromTs, @ToTs,
        DATEDIFF(MONTH, @FromTs, @ToTs),
        DATEADD(MONTH, n.n, DATEADD(DAY, 1-DAY(@FromTs), @FromTs)),
        DATEADD(DAY, -1, DATEADD(MONTH, 1, DATEADD(MONTH, n.n, DATEADD(DAY, 1-DAY(@FromTs), @FromTs))))
    FROM n INNER JOIN AS d ON @ToTs >= DATEADD(MONTH, n.n-1, @FromTs)
)
SELECT qi,qrt,qtt,
    new_from_date = CASE n WHEN 0 THEN f ELSE bp END,
    new_to_date = CASE n WHEN md THEN t ELSE ep END,
    rn
FROM d
WHERE md >= n
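The target intervals are simpler to state with half-open boundaries (next month's first instant as the exclusive end) than with hh:59:59 endpoints; note that splitting a 12-month range this way yields 13 rows, since the first and last parts are partial months. A sketch of the boundary logic (hypothetical month_parts helper):

```python
from datetime import datetime, timedelta

def month_parts(start, end):
    """Cut [start, end) at calendar-month boundaries: the first part
    runs from start to the 1st of the next month, the last from the
    1st of the final month to end."""
    parts, cur = [], start
    while cur < end:
        # first instant of the next month (day 1 + 32 days always lands there)
        nxt = (cur.replace(day=1) + timedelta(days=32)).replace(
            day=1, hour=0, minute=0, second=0, microsecond=0)
        parts.append((cur, min(nxt, end)))
        cur = nxt
    return parts
```

In Synapse, without recursion, the same ROW_NUMBER-over-a-system-table trick the query already uses can generate the month offsets; only the endpoint arithmetic needs to change.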
Tech_Enthu (3 rep)
Jun 29, 2020, 09:57 PM • Last activity: Jun 30, 2020, 05:15 AM
1 votes
2 answers
4924 views
SQL Split Row Data Separated by Spaces
I am looking for a query to find the nth value in a list. The separator is anything greater than or equal to 2 spaces (it can be 3, or 5 spaces). Trying to avoid scalar-valued functions, since performance may be slower. The sentences can have any number of words, from 5-20.
CREATE TABLE dbo.TestWrite (TestWriteId int primary key identity(1,1), TextRow varchar(255))
INSERT INTO dbo.TestWrite (TextRow)
SELECT 'I am writing SQL Code.'
UNION ALL
SELECT 'SQL keywords include join, except, where.'
+-----+----------+---------+---------------+---------+----------+
| SQL | keywords | include | join,         | except, | where.   |
+-----+----------+---------+---------------+---------+----------+
| I   | am       | writing | SQL Code.     |         |          |
+-----+----------+---------+---------------+---------+----------+
I would like the values in individual rows with columns, as above. This may be one solution I am trying to utilize: https://stackoverflow.com/questions/19449492/using-t-sql-return-nth-delimited-element-from-a-string
DECLARE @dlmt NVARCHAR(10)=N' ';
DECLARE @pos INT = 2;
SELECT CAST(N'<x>' + REPLACE(@input,@dlmt,N'</x><x>') + N'</x>' AS XML).value('/x[sql:variable("@pos")]','nvarchar(max)')
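The XML trick above needs the multi-space runs collapsed to a single delimiter first, otherwise empty tokens shift the positions. The tokenization rule itself - fields separated by runs of two or more spaces, single spaces kept inside a field - can be sketched as (hypothetical nth_token helper):

```python
import re

def nth_token(text, pos, min_gap=2):
    """Return the pos-th (1-based) field, where fields are separated by
    runs of min_gap or more spaces; single spaces stay inside a field."""
    fields = re.split(r" {%d,}" % min_gap, text.strip())
    return fields[pos - 1] if 0 < pos <= len(fields) else None
```
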
user172334
Feb 28, 2019, 11:28 PM • Last activity: Apr 13, 2020, 05:54 AM
0 votes
0 answers
33 views
Count Split across Shifts
I have a table which registers orders with a PlanStartTime and PlanEndTime. The customer has three shifts in a day starting from 7 AM - 3 PM, 3 PM - 11 PM, 11 PM - 7 AM (next day) Say an order is planned from 17-Mar-20 8 AM to 17-Mar-20 9 AM and counts of items to be produced is 10, then my first shift count for the work order is 10
E.g

PO#		StartDate			EndDate				Count
--------------------------------------------------------
A1		17-Mar-20 8:00		17-Mar-20 9:00		10
--------------------------------------------------------

Expected Results
--------------------------------------------------
Date			Shift		Items to be produced
--------------------------------------------------
17-Mar-20		1			10

My current problem statement is: in case the work order is to execute across days, how can I have the counts distributed across days per shift?
Problem Statement

PO#		StartDate			EndDate			Count
--------------------------------------------------------
A1		17-Mar-20 8:00		19-Mar-20 9:00		490
--------------------------------------------------------

Expected Results
--------------------------------------------------
Date			Shift		Items to be produced
--------------------------------------------------
17-Mar-20		1			70
17-Mar-20		2			80
17-Mar-20		3			80
18-Mar-20		1			80
18-Mar-20		2			80
18-Mar-20		3			80
19-Mar-20 		1			20

Note: Shift 1 stands for 7 AM to 3 PM, Shift 2 stands for 3 PM to 11 PM, and Shift 3 stands for 11 PM to 7 AM (next day). The count for the work order needs to be equally distributed across the different shifts. The start-date-to-end-date duration may vary from 1 hour to a maximum of 7 days.
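One way to state the required logic: slice the order's [StartDate, EndDate] window at the shift boundaries (07:00, 15:00, 23:00), then spread the count over the slices in proportion to their duration - for A1 above, 49 hours at 10 items/hour gives 70/80/80/80/80/80/20. A sketch (hypothetical helpers):

```python
from datetime import date, datetime, timedelta

SHIFT_STARTS = (7, 15, 23)  # shift 1 / 2 / 3 start hours

def next_boundary(ts):
    """First shift boundary strictly after ts."""
    for h in SHIFT_STARTS:
        cand = ts.replace(hour=h, minute=0, second=0, microsecond=0)
        if cand > ts:
            return cand
    return (ts + timedelta(days=1)).replace(hour=7, minute=0,
                                            second=0, microsecond=0)

def shift_of(ts):
    """(shift_date, shift_no) for a timestamp; 23:00-07:00 belongs to
    shift 3 of the day on which it started."""
    if ts.hour >= 23:
        return ts.date(), 3
    if ts.hour < 7:
        return (ts - timedelta(days=1)).date(), 3
    return ts.date(), 1 if ts.hour < 15 else 2

def distribute(start, end, count):
    """Spread count over the shift slices in proportion to duration.
    (round() can drift by a unit in general; production code needs a
    largest-remainder style correction so the pieces sum to count.)"""
    slices, cur = [], start
    while cur < end:
        nxt = min(next_boundary(cur), end)
        slices.append((shift_of(cur), (nxt - cur).total_seconds()))
        cur = nxt
    total = sum(sec for _, sec in slices)
    return [(key, round(count * sec / total)) for key, sec in slices]
```
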
Kaushik (11 rep)
Mar 17, 2020, 06:20 PM
1 votes
1 answers
618 views
Parsing URL Links
I have a large data set of over 10k+ rows and I'm trying to parse the URL link that people have clicked on. Here is the table: dbo.email_list
UserID  Clicked_Link
101012  https:// amz/profile_center?qp= 8eb6cbf33cfaf2bf0f51
052469  htpps:// lago/center=age_gap=email_address=caipaingn4535=English_USA
046894  https://itune/fr/unsub_email&utm=packing_345=campaign_6458_linkname=ghostrider
So I tried this code:
UPDATE email_list SET Clicked_Link = REVERSE(SUBSTRING(REVERSE(Clicked_Link), CHARINDEX('.', REVERSE(Clicked_Link)) + 1, 999))
Unfortunately this didn't work. The goal is to have the link split where the '=' sign is, and have anything that is between the equal signs in its own column. This is the result I hope to have:
UserID  COL_1                              COL_2          COL_3                   COL_4
101012  https:// amz/profile_center?qp     8eb6cbf33cfaf2bf0f51  NaN
052469  htpps:// lago/center               email_address  caipaingn4535           English_USA
046894  https://itune/fr/unsub_email&utm   packing_345    campaign_6458_linkname  ghostrider
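The REVERSE/CHARINDEX expression only peels off one trailing token; splitting on every '=' into a fixed set of columns is a different operation. The row-level logic can be sketched as follows (hypothetical split_link helper); on the SQL side, PARSENAME(REPLACE(url, '=', '.'), 4..1) is a common trick for up to four parts, though it breaks if the URL itself contains dots:

```python
def split_link(url, n_cols=4):
    """Split a clicked link on '=' into a fixed set of columns,
    trimming whitespace and padding short links with None (NULL)."""
    parts = [p.strip() for p in url.split("=")]
    parts += [None] * (n_cols - len(parts))
    return parts[:n_cols]
```
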
learning (49 rep)
Sep 27, 2019, 02:57 PM • Last activity: Sep 27, 2019, 03:29 PM
1 votes
1 answers
103 views
msaccess split db feature linked tables
We have an Access db that has around 300 linked tables (to SQL Server tables). To switch environments, we run some VBA code that re-links all these tables to the relevant server. But this seems to be very slow: approximately 0.25 seconds for each table, so the whole process can take almost 2 minutes. I thought that I could use the Split Database feature to create a back-end db that has linked tables pointing to the dev environment, and another back-end db with linked tables pointing to the live environment. The process of changing environments would then be to programmatically tell the front-end db which back-end db to use. However, when I tried the Split Database wizard, the back-end db it created has no linked tables, only the local tables. Has anyone got a suggestion as to how to achieve what I'm after?
Steve (21 rep)
Jul 22, 2019, 01:32 PM • Last activity: Jul 23, 2019, 03:28 PM
2 votes
2 answers
2809 views
How to use "regular expression" to separate specific strings in Oracle
I have a string '(1:30,2:4,52:0,8:1)', and I need to use a regular expression to get this output:
field1   field2   level
1        30       1
2        4        2
52       0        3
8        1        4
The query I've written so far is:
select distinct trim(regexp_substr('1:30,2:4,52:0,8:1','[^:,]+',1,level)) repfield, level lvl
from dual
connect by regexp_substr('1:30,2:4,52:0,8:1', '[^:,]+', 1, level) is not null
order by lvl
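The [^:,]+ pattern makes CONNECT BY LEVEL walk the 8 individual tokens, interleaving field1 and field2 across levels; matching whole pairs per level instead ('[^,]+' with REGEXP_SUBSTR, then splitting each pair on ':') keeps both fields on the same level. The intended expansion, sketched in Python (hypothetical split_pairs helper):

```python
import re

def split_pairs(s):
    """Expand '(1:30,2:4,52:0,8:1)' into (field1, field2, level) rows,
    the output the CONNECT BY LEVEL query is after."""
    pairs = re.findall(r"(\d+):(\d+)", s)
    return [(int(a), int(b), lvl) for lvl, (a, b) in enumerate(pairs, 1)]
```
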
Pantea (1510 rep)
Jul 14, 2019, 05:32 AM • Last activity: Jul 14, 2019, 08:47 PM
0 votes
1 answers
261 views
Split Backup of MSSQL VS Stripe Backup of Sybase ASE
I recently came to know about the concept of split backups in MS SQL Server, whereas I have been working with striped backups in Sybase ASE. A striped backup in Sybase ASE is faster than a normal backup; if we use three stripes then Sybase will use 3 CPU cores (depending on the number of cores), as the work is split into that many threads and performed in parallel, irrespective of the storage location. I wanted to understand if the same is the case with an MSSQL split backup. I read that if we use different locations for the split backup then the backup will be performed in parallel and will be faster (Split Backup); however, if the storage location is the same (same drive and folder), no extra IO is involved and hence there is not much impact on performance.
Learning_DBAdmin (3924 rep)
Mar 11, 2019, 12:16 PM • Last activity: Mar 11, 2019, 01:18 PM
4 votes
4 answers
3717 views
SQL server database backup - Destination Disk - Adding multiple files - does it duplicate or split backup into the files?
When we do a full database backup (using the SSMS UI), at the bottom of the window we have the option to specify the destination as Disk and also to add multiple files. My question is: does adding multiple files create duplicate copies of the full backup, or does it create a split backup - that is, split the full backup across the specified files? This book suggests it does duplication, whereas this link suggests that it does a split. Please can someone clarify.
variable (3590 rep)
Oct 12, 2018, 08:22 AM • Last activity: Oct 12, 2018, 10:41 AM
0 votes
2 answers
5317 views
string_split into multiple columns
Sorry if this question is very basic, but I just can't figure it out, and couldn't find any good answers. I have a very long list of files, folders and sizes. I need to separate the folders into columns, so that I have a column for each folder. I have a FilePath as a string (e.g. folder1\folder2\folder3). I want to separate this into multiple columns:
First   | Second  | Third   | Fourth  | ...
folder1 | folder2 | folder3 | NULL    | ...
foldera | folderb | folderc | folderd |
Using
cross apply string_split(PATH, '\') as folder
I get folder1. Using row_number() I can define which folder I have in my column, but it is always only 1 column. Actual example:
select [Parentpath], [Size], [spl].[value]
from Files
cross apply string_split([ParentPath], '\') as [spl]
Parentpath || Size || Value
Business\Packets\Data\Archive || 29334 || Business
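STRING_SPLIT returns one row per element with no ordinal column before SQL Server 2022's enable_ordinal argument, which is why the folders all land in one column. The target shape - a fixed set of NULL-padded folder columns - can be sketched as (hypothetical path_columns helper); pre-2022 T-SQL workarounds include PARSENAME after REPLACE('\', '.') for up to four levels, or a JSON/XML split that preserves position:

```python
def path_columns(parent_path, n_cols=4):
    """Split a backslash path into fixed, NULL-padded folder columns."""
    parts = parent_path.split("\\")
    parts += [None] * (n_cols - len(parts))
    return parts[:n_cols]
```
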
Gerrit (15 rep)
Oct 10, 2018, 08:13 AM • Last activity: Oct 10, 2018, 12:27 PM
Showing page 1 of 20 total questions