Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
290
views
Error when replicating Full Text indexes from SQL Server to an old Azure SQL database
When trying to deliver a snapshot for the transactional replication from SQL Server database to an old Azure SQL database (the database created long before Microsoft introduced Full Text support in Azure SQL) I get the following error:
Command attempted:
if databasepropertyex(db_name(),'IsFullTextEnabled') = 0 and isnull(is_member('db_owner'),0) = 1
exec sp_fulltext_database 'enable'
(Transaction sequence number: xxxxxx, Command ID: xxxxx)
Error messages:
DBCC command 'CALLFULLTEXT' is not supported in this version of SQL Server. (Source: MSSQLServer, Error number: 40518)
I noticed that for very old Azure SQL databases the **IsFullTextEnabled** flag is always set to **0**, despite the Full Text capabilities working just fine. This is in contrast to newly created Azure SQL databases, which always get this flag set to **1** by default.
This looks like a bug in the replication logic in SQL Server (I have checked the 2019 and 2022 versions so far): the flag is deprecated in Azure SQL and should never be evaluated by new logic, yet SQL Server replication still tries to evaluate it.
Does anyone know how to overcome this issue and make the replication from SQL Server to Azure SQL work together with full text indexes?
For now, the only viable solutions I can think of are:
1. Do not replicate full text indexes at all to such old Azure databases.
2. Create a brand new database and replicate to it leaving the old one no longer used.
However, both approaches are imperfect and have some disadvantages. Can I somehow change the **IsFullTextEnabled** flag to **1** for such old Azure SQL databases, or force the replication logic not to evaluate it at all?
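For reference, a minimal way to inspect the flag on the target database (this only reads the deprecated property; whether it can be changed on an old Azure SQL database is exactly what I'm asking):
-- Check the flag that the replication snapshot script evaluates
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsFullTextEnabled') AS IsFullTextEnabled;
-- The snapshot then issues the call below, which fails with error 40518 on old
-- Azure SQL databases; shown here only for reference.
-- EXEC sp_fulltext_database 'enable';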
Przemysław Łukawski
(21 rep)
Jan 17, 2023, 01:12 PM
• Last activity: Sep 20, 2025, 07:02 AM
1
votes
0
answers
3
views
Why does GridDB Cloud Python client throw GSException: invalid RowKey when inserting rows with explicit UUID keys?
I’m testing **GridDB Cloud with the Python client** (griddb_python) and want to use UUIDs as row keys instead of integers or strings.
**Schema:**
import griddb_python as griddb
import uuid, datetime
factory = griddb.StoreFactory.get_instance()
store = factory.get_store(
notification_member="...",
cluster_name="...",
user="...",
password="..."
)
cols = [
("id", griddb.Type.STRING), # intended for UUID string
("ts", griddb.Type.TIMESTAMP),
("value", griddb.Type.FLOAT)
]
container = store.put_container("uuid_test", cols, True) # true = rowkey
**Insert attempt:**
row = (str(uuid.uuid4()), datetime.datetime.utcnow(), 42.5)
container.put(row)
**Error**:
[GSException] (errorCode=0x7d010001, message=RowKey value is invalid)
If I switch the row key to a plain string like "sensor_1", it works fine.
**Question:**
Are there restrictions on **row key formats in GridDB Cloud** (e.g., length, character set, UUID format)? If UUIDs aren't allowed directly, what's the recommended workaround: store them as a secondary column with an index, or encode them differently before inserting?
**What I’ve checked:**
- GridDB docs state row keys can be STRING, INTEGER, or TIMESTAMP, but don’t clarify format restrictions on string keys.
- Tried both Python uuid.uuid4() and hex string version (uuid.uuid4().hex) — same error.
- Java client test with the same schema also rejects UUIDs as row keys.
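For completeness, a minimal sketch of the secondary-column workaround I'm asking about, reusing the same calls as the snippet above (the container name uuid_test_v2 is made up, and I haven't decided whether the uuid column also needs an index):
import griddb_python as griddb
import uuid, datetime
factory = griddb.StoreFactory.get_instance()
store = factory.get_store(notification_member="...", cluster_name="...", user="...", password="...")
# Plain string row key (known to work), UUID stored as an ordinary column
cols = [
    ("id", griddb.Type.STRING),     # e.g. "sensor_1"
    ("uuid", griddb.Type.STRING),   # UUID kept as data, not as the row key
    ("ts", griddb.Type.TIMESTAMP),
    ("value", griddb.Type.FLOAT)
]
container = store.put_container("uuid_test_v2", cols, True)  # True = first column is the row key
container.put(("sensor_1", str(uuid.uuid4()), datetime.datetime.utcnow(), 42.5))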
VIK
(41 rep)
Sep 20, 2025, 06:37 AM
0
votes
1
answers
629
views
Access tables from multiple schemas
I have two schemas with multiple tables. The queries in packages heavily reference schema names. Is it possible to remove the schema names from the queries and have Oracle identify the tables based on roles or privileges?
Query -
Select * from schema1.app_table1 ap1 inner join schema2.app_table2 ap2
The query would be run from oracle user app_user who has access to both the tables - app_table1 and app_table2.
Is it possible to run the query without schema prefixes or synonyms just by modifying the user app_user?
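For reference, a hedged sketch of ALTER SESSION SET CURRENT_SCHEMA, which changes the default schema used to resolve unqualified names but covers at most one of the two schemas per session (the join condition below is hypothetical):
-- Resolve unqualified names against schema1 for this session only
ALTER SESSION SET CURRENT_SCHEMA = schema1;
-- app_table1 now resolves without a prefix; app_table2 (in schema2) still needs
-- its prefix or a synonym, since only one default schema applies at a time.
SELECT *
FROM app_table1 ap1
INNER JOIN schema2.app_table2 ap2 ON ap1.id = ap2.id;   -- hypothetical join column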
Thanks for your help.
Surajkuamr Shinde
(1 rep)
Feb 20, 2019, 12:30 AM
• Last activity: Sep 20, 2025, 06:02 AM
0
votes
1
answers
375
views
How much memory using for adaptive hash indexes in MariaDB?
I'm reading about [adaptive hash indexes](https://dev.mysql.com/doc/refman/5.7/en/innodb-adaptive-hash.html) in MariaDB. It is created in memory, so it would be good to know how much memory is usually used for the index table.
For example, my local development database has only 2GB of RAM. Is it theoretically possible that the adaptive hash index table will use more than 50% of total memory?
I tried to search for how much memory is usually used for the AHI, but there is no information on the internet about its memory use or how to limit it. The only thing I found is [innodb-adaptive-hash-index-parts](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_adaptive_hash_index_parts), but there is no explanation of what a specific partition means or how large it is.
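For what it's worth, a hedged way to check the current AHI footprint and to switch the feature off if it becomes a concern on a small instance (both are standard InnoDB facilities; the exact status output varies by version):
-- The "INSERT BUFFER AND ADAPTIVE HASH INDEX" section of the status output
-- reports the hash table size and the node heap buffers currently in use.
SHOW ENGINE INNODB STATUS\G
-- There is no dedicated variable to cap the AHI size; if it is a concern,
-- it can simply be disabled:
SET GLOBAL innodb_adaptive_hash_index = OFF;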
rzlvmp
(111 rep)
Jul 26, 2023, 01:04 AM
• Last activity: Sep 20, 2025, 05:08 AM
1
votes
1
answers
288
views
Query performance with primary keys of different data types
Is there a significant difference in performance if I decide to use INT or BIGINT as the data type of a primary key in a table? Of course, if I use BIGINT the value will be twice the size. But since the primary key always has an index to speed up queries, is there going to be a significant difference in performance?
Mateus Viccari
(449 rep)
Feb 6, 2015, 10:31 AM
• Last activity: Sep 20, 2025, 04:06 AM
0
votes
1
answers
433
views
Update two tables, each on separate MySQL servers, on two different machines in one query?
I have two MySQL servers running on two different machines in a local network. I am using PHP and accessing the databases via IP. No problems there. The databases have identical schemas. One of them was an import from the other machine.
I want to run a single (ONE) query that updates the same table on each server simultaneously. I already have existing queries and do not want to update them extensively.
Is there a simple way to achieve this?
I'm looking to keep the databases in sync when an update occurs.
I am also open to any other suggestions regarding other ways to keep the databases in sync.
drforbin
(1 rep)
Oct 5, 2018, 06:38 PM
• Last activity: Sep 20, 2025, 03:02 AM
15
votes
3
answers
9336
views
What is the point of TRY CATCH block when XACT_ABORT is turned ON?
Code sample with XACT_ABORT ON:
SET XACT_ABORT ON;
BEGIN TRY
BEGIN TRANSACTION
-- do multiple lines of SQL here
COMMIT TRANSACTION
END TRY
BEGIN CATCH
IF (@@TRANCOUNT > 0) ROLLBACK;
-- maybe print/log/throw the error
END CATCH
Since XACT_ABORT is ON, any error will automatically roll back the transaction. So what purpose does the TRY CATCH block serve?
variable
(3590 rep)
Feb 1, 2022, 06:10 AM
• Last activity: Sep 20, 2025, 02:18 AM
0
votes
3
answers
140
views
Extended Events sessions stops collection when edited while the session is on run
I created an Extended Events session - based on Standard SQL template.
Just added Global filter on sqlserver.databasename for a particular DB.
Started watching live data.
Updated the session to edit it, added a few more events, and clicked OK. The session was running while the edit happened.
Now Watch Live Data stopped adding rows.
I refreshed the 'Sessions' folder and the created session, but Watch Live Data didn't resume.
Closed and opened it again by right-clicking the session name and choosing 'Watch Live Data'. Now it shows 'Retrieving event information from server' but never adds any events.
When I checked the Target Data from the event file, the last events it traced were from before the edit happened. Though a day has passed, the Target Data never gets updated.
I have to create a new session.
Is this how Extended Events sessions work?
Or am I wrong? Kindly help. I can't keep creating a new session for every edit we need.
Thank you!
Adding a screenshot from the Standard XEvent Profiler, which is also not working.
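For reference, a hedged sketch of simply bouncing the session after an edit instead of recreating it, using the session name from the script below (whether this reliably resumes Watch Live Data is part of what I'm unsure about):
-- Stop and restart the existing session after changing its definition
ALTER EVENT SESSION [SessionName] ON SERVER STATE = STOP;
ALTER EVENT SESSION [SessionName] ON SERVER STATE = START;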
Here is the script of the session - (Changed session name & DB name alone)
CREATE EVENT SESSION [SessionName] ON SERVER
ADD EVENT sqlserver.database_xml_deadlock_report(
ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
WHERE ([sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.existing_connection(SET collect_database_name=(1),collect_options_text=(1)
ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
WHERE ([sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.lock_deadlock_chain(SET collect_database_name=(1),collect_resource_description=(1)
ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)),
ADD EVENT sqlserver.rpc_starting(SET collect_data_stream=(1)
ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
WHERE ([package0].[equal_boolean]([sqlserver].[is_system],(0)) AND [sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.sp_statement_completed(SET collect_object_name=(1)
ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
WHERE ([sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.sql_batch_completed(
ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
WHERE ([sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.sql_batch_starting(
ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
WHERE ([package0].[equal_boolean]([sqlserver].[is_system],(0)) AND [sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.sql_statement_completed(SET collect_parameterized_plan_handle=(1)
ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
WHERE ([sqlserver].[database_name]=N'DBName'))
ADD TARGET package0.event_file(SET filename=N'E:\SessionName.xel')
WITH (MAX_MEMORY=8192 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=5 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=PER_CPU,TRACK_CAUSALITY=ON,STARTUP_STATE=ON)
GO
The event global filter I mentioned is shown below; it is applied to all events:


Meera K
(81 rep)
Aug 23, 2024, 07:08 AM
• Last activity: Sep 20, 2025, 02:06 AM
0
votes
1
answers
321
views
Postgres - Improving multi-column GIN text search performance
I've got a schema for customer records a bit like:
account_id - UUID
name - text
I currently have a GIN index that looks like:
... USING GIN (account_id, name gin_trgm_ops)
We have a mix of accounts. Some have a lot of customers (10m+) and some only have a few (10k+). We have a lot of queries that look like:
SELECT from customers WHERE account_id= AND
(
name LIKE 'Bob%'
OR name LIKE 'Alice%'
OR name LIKE 'Dave%'
OR name LIKE 'Carol%'
OR name LIKE 'Edward%'
OR name LIKE 'Fay%'
)
For large customers the index performance is good (queries ~2s). For small customers the index performance is comparatively poor (also ~2s). This matters due to how often we run these queries.
Is there a way to improve this for smaller customers? We've noticed that replacing the index with a simple btree index on account_id is faster: scanning all the records for a single **small** account is faster than doing the index bitmap work across *all* accounts. Obviously, this is a lot slower on the large accounts.
I think partitioning the table is the only way forward here. However, I'm hoping someone has a bright idea :)
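For reference, one alternative worth testing, since every pattern in the example is an anchored prefix (LIKE 'Bob%'): a plain composite btree with the text_pattern_ops operator class, which can serve prefix LIKE matches without the trigram bitmap work (table and column names are taken from the schema above):
-- Prefix-only LIKE patterns can use an ordinary btree with text_pattern_ops,
-- scoped per account so small accounts touch very few index entries.
CREATE INDEX customers_account_name_prefix_idx
    ON customers (account_id, name text_pattern_ops);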
ColinHowe
(101 rep)
Jun 29, 2024, 10:56 AM
• Last activity: Sep 20, 2025, 01:02 AM
0
votes
1
answers
339
views
Linked Server not connecting to SQL Server 2022 database
I have a SQL Server 2019 Enterprise database in a data center far, far away. I have a SQL Server 2019 developer edition database on my local laptop (A), and it is a Linked Server on the far away database (B). The security is based on a local SQL login on the A database. Everything works fine. I do development work and test the B database where I create JOINs back to my A database.
My IT group just issued a new laptop to me. I installed SQL Server 2022 developer edition and now I have the database (C) configured just like the one on my old laptop A. However, I cannot create a Linked Server connection to the C database from the B database. I have confirmed about 49 times that the SQL login account is exactly the same for both A and C databases. I have deleted and re-created the Linked Server from B to A multiple times today, and it works fine. But when I try to create a Linked Server from B to C, I receive an error message that the connection failed.
I have noticed in SQL Config Mgr that there are no Client Protocols on the C database. Apparently this went away with SQL 2022. I don't think this is the issue; I have made sure the port numbers on C match what is on A, which is 1433. I have restarted the SQL Browser service on C multiple times but this did not help.
I have laptops A and C sitting side by side, and I cannot see any difference in the way the databases are configured. What am I missing?
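For reference, a hedged sketch of how the linked server on B could be defined with an explicit TCP endpoint, to rule name resolution and browser/port discovery in or out (the server alias, address and login below are placeholders):
-- On server B: point the linked server at laptop C by IP and port rather than by name
EXEC sp_addlinkedserver
     @server     = N'LAPTOP_C',
     @srvproduct = N'',
     @provider   = N'MSOLEDBSQL',
     @datasrc    = N'tcp:192.168.1.50,1433';   -- placeholder address

EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'LAPTOP_C',
     @useself     = N'false',
     @rmtuser     = N'linked_login',            -- placeholder SQL login
     @rmtpassword = N'...';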

MarkF
(1 rep)
Jul 28, 2024, 02:16 AM
• Last activity: Sep 20, 2025, 12:06 AM
1
votes
2
answers
257
views
Grabbing SQL Dump of Master DB Without Downtime
I'm curious whether downtime will be necessary to grab a SQL dump of my master database.
Right now, I'm in the process of rebuilding my one slave. There is actually only one database from master that is being replicated onto slave. All tables in that database are InnoDB. This is the command I want to run:
mysqldump --master-data --single-transaction --hex-blob dbname | gzip > dbname.sql.gz
I'm running MySQL 5.1 and here is a redacted version of my my.cnf file:
[mysqld]
default-storage-engine=InnoDB
character-set-server=UTF8
lower_case_table_names=1
transaction_isolation=READ-COMMITTED
wait_timeout=86400
interactive_timeout=3600
delayed_insert_timeout=10000
connect_timeout=100000
max_connections=750
max_connect_errors=1000
back_log=50
max_allowed_packet=1G
max_heap_table_size=64M
tmp_table_size=64M
bulk_insert_buffer_size=128M
innodb_buffer_pool_size=10000M
innodb_data_file_path=ibdata1:256M:autoextend
innodb_file_per_table=1
innodb_additional_mem_pool_size=32M
innodb_log_file_size=1G
innodb_log_buffer_size=8M
innodb_flush_method=O_DIRECT
innodb_lock_wait_timeout=240
innodb_flush_log_at_trx_commit=2
innodb_open_files=8192
innodb_support_xa=ON
thread_cache_size=500
expire_logs_days=2
server-id=1
log_bin=1
binlog_format=MIXED
sync_binlog=0
[mysqldump]
max_allowed_packet=128M
Am I good without downtime or not? I'm concerned about a possible read lock being placed on tables.
Jordan Parra
(19 rep)
Jul 14, 2016, 01:12 AM
• Last activity: Sep 19, 2025, 11:09 PM
1
votes
1
answers
56
views
Question on Index Fragmentation and Index Rebuilds
I am currently sitting with an Azure managed SQL Server, with all my table indexes at >99% fragmentation. For some reason this instance was set to use DTUs; it's on a 250 DTU tier. These tables are constantly being written to and read from, and data is deleted nightly.
Is there a way I can rebuild the indexes while things are running without the risk of any of the other queries timing out?
Or do I need to schedule some down time and kill the connections so that I can rebuild the indexes? Do I perhaps need to scale this DB up for this temporarily to something like 500 DTU's?
I would appreciate any advice on what would be a good or a correct way to handle this.
(Yes, there is no DBA; no one was looking after this DB. I happened to notice this and now sit with this problem :( ...)
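For reference, a hedged sketch of the kind of rebuild I'm considering: Azure SQL Database supports online, resumable rebuilds, which should avoid blocking the other queries, though the DTU cost is the open question (the index and table names are placeholders):
-- Online, resumable rebuild that can be paused if it starts to hurt the workload
ALTER INDEX IX_SomeBusyIndex ON dbo.SomeBusyTable
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);
-- From another session, if needed:
-- ALTER INDEX IX_SomeBusyIndex ON dbo.SomeBusyTable PAUSE;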
Chaos
(21 rep)
Sep 18, 2025, 10:50 PM
• Last activity: Sep 19, 2025, 10:24 PM
1
votes
1
answers
280
views
Run out of disc space on Debian server due to postgresql database
I recently ran out of disk space on my server due to implementing some audit tools on a PostgreSQL database. Two tables holding data about changes made by users quickly reached 16GB each and completely filled the 40GB server. I checked the size of these tables with this command in pgAdmin 4:
SELECT
relname as "Table",
pg_size_pretty(pg_total_relation_size(relid)) As "Size",
pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) as "External Size"
FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC;
Following that, I deleted these two tables to save space, but despite that I haven't actually recovered any disk space. I ran the df and du commands on Debian to check disk usage:
root@vps673587:/home/maciejzak# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 40G 36G 2.4G 94% /
udev 10M 0 10M 0% /dev
tmpfs 774M 81M 694M 11% /run
tmpfs 1.9G 8.0K 1.9G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 387M 0 387M 0% /run/user/1003
tmpfs 387M 0 387M 0% /run/user/1001
root@vps673587:/home/maciejzak# sudo du -sh /*
8.9M /bin
72M /boot
8.0K /dev
7.0M /etc
92M /home
0 /initrd.img
0 /initrd.img.old
537M /lib
4.0K /lib64
16K /lost+found
8.0K /media
4.0K /mnt
4.0K /opt
du: cannot access ‘/proc/10127/task/10127/fd/4’: No such file or directory
du: cannot access ‘/proc/10127/task/10127/fdinfo/4’: No such file or directory
du: cannot access ‘/proc/10127/fd/4’: No such file or directory
du: cannot access ‘/proc/10127/fdinfo/4’: No such file or directory
0 /proc
124K /root
81M /run
4.9M /sbin
12K /srv
0 /sys
44K /tmp
968M /usr
3.6G /var
0 /vmlinuz
0 /vmlinuz.old
So as you can see, 94% (36GB) of sda1 is used, while only 3.6GB is actually used in /var. What happened to the rest of that disk space? I ran the lsof command:
root@vps673587:/home/maciejzak# lsof | grep -i deleted
sudo 2529 root 0u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
sudo 2529 root 1u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
sudo 2529 root 2u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
proftpd 2530 root 0u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
proftpd 2530 root 1u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
proftpd 2530 root 2u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
proftpd 2530 root 10r REG 8,1 5262 29171 /etc/init.d/proftpd (deleted)
systemctl 2540 root 0u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
systemctl 2540 root 1u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
systemctl 2540 root 2u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
systemd-t 2541 root 0u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
systemd-t 2541 root 1u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
systemd-t 2541 root 2u CHR 136,0 0t0 3 /dev/pts/0 (deleted)
postgres 8927 postgres 19u REG 8,1 24576 262826 /var/lib/postgresql/10/main/base/18036/68886_fsm (deleted)
postgres 8927 postgres 58u REG 8,1 0 263332 /var/lib/postgresql/10/main/base/18036/68886 (deleted)
postgres 8927 postgres 59u REG 8,1 8192 263850 /var/lib/postgresql/10/main/base/18036/68886_vm (deleted)
postgres 8927 postgres 60u REG 8,1 0 263783 /var/lib/postgresql/10/main/base/18036/68894 (deleted)
postgres 8927 postgres 61u REG 8,1 0 263782 /var/lib/postgresql/10/main/base/18036/68893 (deleted)
postgres 8927 postgres 62u REG 8,1 4251648 263049 /var/lib/postgresql/10/main/base/18036/68889_fsm (deleted)
postgres 8927 postgres 64u REG 8,1 4251648 263320 /var/lib/postgresql/10/main/base/18036/69116_fsm (deleted)
postgres 8927 postgres 65u REG 8,1 24576 262943 /var/lib/postgresql/10/main/base/18036/69110_fsm (deleted)
postgres 8927 postgres 66u REG 8,1 0 262940 /var/lib/postgresql/10/main/base/18036/69110 (deleted)
postgres 8927 postgres 67u REG 8,1 8192 263851 /var/lib/postgresql/10/main/base/18036/69110_vm (deleted)
postgres 8927 postgres 68u REG 8,1 0 263336 /var/lib/postgresql/10/main/base/18036/69119 (deleted)
postgres 8927 postgres 69u REG 8,1 0 263341 /var/lib/postgresql/10/main/base/18036/69120 (deleted)
postgres 8927 postgres 70u REG 8,1 16777216 263791 /var/lib/postgresql/10/main/pg_wal/000000010000000B0000003A (deleted)
postgres 8927 postgres 72u REG 8,1 0 263759 /var/lib/postgresql/10/main/base/18036/68891 (deleted)
postgres 8927 postgres 73u REG 8,1 0 263781 /var/lib/postgresql/10/main/base/18036/68892 (deleted)
postgres 8927 postgres 74u REG 8,1 0 263340 /var/lib/postgresql/10/main/base/18036/68889 (deleted)
postgres 8927 postgres 75u REG 8,1 1073741824 263822 /var/lib/postgresql/10/main/base/18036/68889.1 (deleted)
postgres 8927 postgres 76u REG 8,1 1073741824 263824 /var/lib/postgresql/10/main/base/18036/68889.2 (deleted)
postgres 8927 postgres 77u REG 8,1 1073741824 263826 /var/lib/postgresql/10/main/base/18036/68889.3 (deleted)
postgres 8927 postgres 78u REG 8,1 1073741824 263828 /var/lib/postgresql/10/main/base/18036/68889.4 (deleted)
postgres 8927 postgres 79u REG 8,1 1073741824 263831 /var/lib/postgresql/10/main/base/18036/68889.5 (deleted)
postgres 8927 postgres 80u REG 8,1 1073741824 263834 /var/lib/postgresql/10/main/base/18036/68889.6 (deleted)
postgres 8927 postgres 81u REG 8,1 1073741824 263838 /var/lib/postgresql/10/main/base/18036/68889.7 (deleted)
postgres 8927 postgres 82u REG 8,1 1073741824 263841 /var/lib/postgresql/10/main/base/18036/68889.8 (deleted)
postgres 8927 postgres 83u REG 8,1 1073741824 263844 /var/lib/postgresql/10/main/base/18036/68889.9 (deleted)
postgres 8927 postgres 84u REG 8,1 1073741824 263847 /var/lib/postgresql/10/main/base/18036/68889.10 (deleted)
postgres 8927 postgres 85u REG 8,1 0 263308 /var/lib/postgresql/10/main/base/18036/69118 (deleted)
postgres 8927 postgres 86u REG 8,1 0 263085 /var/lib/postgresql/10/main/base/18036/68896 (deleted)
postgres 8927 postgres 87u REG 8,1 0 263305 /var/lib/postgresql/10/main/base/18036/69116 (deleted)
postgres 8927 postgres 88u REG 8,1 1073741824 263823 /var/lib/postgresql/10/main/base/18036/69116.1 (deleted)
postgres 8927 postgres 89u REG 8,1 1073741824 263825 /var/lib/postgresql/10/main/base/18036/69116.2 (deleted)
postgres 8927 postgres 90u REG 8,1 1073741824 263827 /var/lib/postgresql/10/main/base/18036/69116.3 (deleted)
postgres 8927 postgres 91u REG 8,1 1073741824 263829 /var/lib/postgresql/10/main/base/18036/69116.4 (deleted)
postgres 8927 postgres 92u REG 8,1 1073741824 263832 /var/lib/postgresql/10/main/base/18036/69116.5 (deleted)
postgres 8927 postgres 93u REG 8,1 1073741824 263835 /var/lib/postgresql/10/main/base/18036/69116.6 (deleted)
postgres 8927 postgres 94u REG 8,1 1073741824 263839 /var/lib/postgresql/10/main/base/18036/69116.7 (deleted)
postgres 8927 postgres 95u REG 8,1 1073741824 263842 /var/lib/postgresql/10/main/base/18036/69116.8 (deleted)
postgres 8927 postgres 96u REG 8,1 1073741824 263845 /var/lib/postgresql/10/main/base/18036/69116.9 (deleted)
postgres 8927 postgres 97u REG 8,1 1073741824 263848 /var/lib/postgresql/10/main/base/18036/69116.10 (deleted)
postgres 8927 postgres 99u REG 8,1 0 263342 /var/lib/postgresql/10/main/base/18036/69121 (deleted)
postgres 8927 postgres 100u REG 8,1 1073741824 263016 /var/lib/postgresql/10/main/base/18036/68889.11 (deleted)
postgres 8927 postgres 101u REG 8,1 1073741824 263074 /var/lib/postgresql/10/main/base/18036/69116.11 (deleted)
postgres 8927 postgres 102u REG 8,1 1073741824 263020 /var/lib/postgresql/10/main/base/18036/68889.12 (deleted)
postgres 8927 postgres 103u REG 8,1 1073741824 263113 /var/lib/postgresql/10/main/base/18036/69116.12 (deleted)
postgres 8927 postgres 104u REG 8,1 1073741824 263082 /var/lib/postgresql/10/main/base/18036/68889.13 (deleted)
postgres 8927 postgres 105u REG 8,1 1073741824 263133 /var/lib/postgresql/10/main/base/18036/69116.13 (deleted)
postgres 8927 postgres 106u REG 8,1 1073741824 263131 /var/lib/postgresql/10/main/base/18036/68889.14 (deleted)
postgres 8927 postgres 107u REG 8,1 1073741824 263136 /var/lib/postgresql/10/main/base/18036/69116.14 (deleted)
postgres 8927 postgres 108u REG 8,1 1073741824 259461 /var/lib/postgresql/10/main/base/18036/68889.15 (deleted)
postgres 8927 postgres 109u REG 8,1 1073741824 263153 /var/lib/postgresql/10/main/base/18036/69116.15 (deleted)
postgres 8927 postgres 110u REG 8,1 50221056 263135 /var/lib/postgresql/10/main/base/18036/68889.16 (deleted)
postgres 8927 postgres 111u REG 8,1 49745920 263155 /var/lib/postgresql/10/main/base/18036/69116.16 (deleted)
postgres 9058 postgres 78u REG 8,1 24576 262826 /var/lib/postgresql/10/main/base/18036/68886_fsm (deleted)
postgres 9058 postgres 79u REG 8,1 0 263332 /var/lib/postgresql/10/main/base/18036/68886 (deleted)
postgres 9058 postgres 80u REG 8,1 8192 263850 /var/lib/postgresql/10/main/base/18036/68886_vm (deleted)
postgres 9058 postgres 81u REG 8,1 0 263781 /var/lib/postgresql/10/main/base/18036/68892 (deleted)
postgres 9058 postgres 82u REG 8,1 0 263782 /var/lib/postgresql/10/main/base/18036/68893 (deleted)
postgres 9058 postgres 83u REG 8,1 0 263783 /var/lib/postgresql/10/main/base/18036/68894 (deleted)
postgres 9058 postgres 84u REG 8,1 0 263085 /var/lib/postgresql/10/main/base/18036/68896 (deleted)
postgres 9058 postgres 85u REG 8,1 24576 262943 /var/lib/postgresql/10/main/base/18036/69110_fsm (deleted)
postgres 9058 postgres 86u REG 8,1 0 262940 /var/lib/postgresql/10/main/base/18036/69110 (deleted)
postgres 9058 postgres 87u REG 8,1 8192 263851 /var/lib/postgresql/10/main/base/18036/69110_vm (deleted)
postgres 9058 postgres 88u REG 8,1 0 263336 /var/lib/postgresql/10/main/base/18036/69119 (deleted)
postgres 9058 postgres 89u REG 8,1 0 263341 /var/lib/postgresql/10/main/base/18036/69120 (deleted)
postgres 9058 postgres 90u REG 8,1 0 263342 /var/lib/postgresql/10/main/base/18036/69121 (deleted)
postgres 9058 postgres 91u REG 8,1 16777216 263313 /var/lib/postgresql/10/main/pg_wal/000000010000000B000000E8 (deleted)
postgres 9087 postgres 106u REG 8,1 0 263332 /var/lib/postgresql/10/main/base/18036/68886 (deleted)
postgres 9087 postgres 107u REG 8,1 0 263781 /var/lib/postgresql/10/main/base/18036/68892 (deleted)
postgres 9087 postgres 108u REG 8,1 0 263782 /var/lib/postgresql/10/main/base/18036/68893 (deleted)
postgres 9087 postgres 109u REG 8,1 0 263783 /var/lib/postgresql/10/main/base/18036/68894 (deleted)
postgres 9087 postgres 110u REG 8,1 0 263085 /var/lib/postgresql/10/main/base/18036/68896 (deleted)
postgres 9087 postgres 111u REG 8,1 0 262940 /var/lib/postgresql/10/main/base/18036/69110 (deleted)
postgres 9087 postgres 112u REG 8,1 0 263336 /var/lib/postgresql/10/main/base/18036/69119 (deleted)
postgres 9087 postgres 113u REG 8,1 0 263341 /var/lib/postgresql/10/main/base/18036/69120 (deleted)
postgres 9087 postgres 114u REG 8,1 0 263342 /var/lib/postgresql/10/main/base/18036/69121 (deleted)
postgres 9500 postgres 56u REG 8,1 0 263332 /var/lib/postgresql/10/main/base/18036/68886 (deleted)
postgres 9500 postgres 57u REG 8,1 0 263781 /var/lib/postgresql/10/main/base/18036/68892 (deleted)
postgres 9500 postgres 58u REG 8,1 0 263782 /var/lib/postgresql/10/main/base/18036/68893 (deleted)
postgres 9500 postgres 59u REG 8,1 0 263783 /var/lib/postgresql/10/main/base/18036/68894 (deleted)
postgres 9500 postgres 60u REG 8,1 0 263085 /var/lib/postgresql/10/main/base/18036/68896 (deleted)
postgres 9500 postgres 62u REG 8,1 0 263759 /var/lib/postgresql/10/main/base/18036/68891 (deleted)
postgres 9500 postgres 63u REG 8,1 0 263340 /var/lib/postgresql/10/main/base/18036/68889 (deleted)
postgres 9500 postgres 64u REG 8,1 1073741824 263822 /var/lib/postgresql/10/main/base/18036/68889.1 (deleted)
postgres 9500 postgres 65u REG 8,1 1073741824 263824 /var/lib/postgresql/10/main/base/18036/68889.2 (deleted)
postgres 9500 postgres 66u REG 8,1 1073741824 263826 /var/lib/postgresql/10/main/base/18036/68889.3 (deleted)
postgres 9500 postgres 67u REG 8,1 1073741824 263828 /var/lib/postgresql/10/main/base/18036/68889.4 (deleted)
postgres 9500 postgres 68u REG 8,1 1073741824 263831 /var/lib/postgresql/10/main/base/18036/68889.5 (deleted)
postgres 9500 postgres 69u REG 8,1 1073741824 263834 /var/lib/postgresql/10/main/base/18036/68889.6 (deleted)
postgres 9500 postgres 70u REG 8,1 1073741824 263838 /var/lib/postgresql/10/main/base/18036/68889.7 (deleted)
postgres 9500 postgres 71u REG 8,1 1073741824 263841 /var/lib/postgresql/10/main/base/18036/68889.8 (deleted)
postgres 9500 postgres 72u REG 8,1 1073741824 263844 /var/lib/postgresql/10/main/base/18036/68889.9 (deleted)
postgres 9500 postgres 73u REG 8,1 1073741824 263847 /var/lib/postgresql/10/main/base/18036/68889.10 (deleted)
postgres 9500 postgres 74u REG 8,1 0 262940 /var/lib/postgresql/10/main/base/18036/69110 (deleted)
postgres 9500 postgres 75u REG 8,1 0 263336 /var/lib/postgresql/10/main/base/18036/69119 (deleted)
postgres 9500 postgres 76u REG 8,1 0 263341 /var/lib/postgresql/10/main/base/18036/69120 (deleted)
postgres 9500 postgres 77u REG 8,1 0 263342 /var/lib/postgresql/10/main/base/18036/69121 (deleted)
postgres 9500 postgres 80u REG 8,1 8192 263851 /var/lib/postgresql/10/main/base/18036/69110_vm (deleted)
postgres 9500 postgres 81u REG 8,1 0 263308 /var/lib/postgresql/10/main/base/18036/69118 (deleted)
postgres 9500 postgres 82u REG 8,1 0 263305 /var/lib/postgresql/10/main/base/18036/69116 (deleted)
postgres 9500 postgres 83u REG 8,1 1073741824 263823 /var/lib/postgresql/10/main/base/18036/69116.1 (deleted)
postgres 9500 postgres 84u REG 8,1 1073741824 263825 /var/lib/postgresql/10/main/base/18036/69116.2 (deleted)
postgres 9500 postgres 85u REG 8,1 1073741824 263827 /var/lib/postgresql/10/main/base/18036/69116.3 (deleted)
postgres 9500 postgres 86u REG 8,1 1073741824 263829 /var/lib/postgresql/10/main/base/18036/69116.4 (deleted)
postgres 9500 postgres 87u REG 8,1 1073741824 263832 /var/lib/postgresql/10/main/base/18036/69116.5 (deleted)
postgres 9500 postgres 88u REG 8,1 1073741824 263835 /var/lib/postgresql/10/main/base/18036/69116.6 (deleted)
postgres 9500 postgres 89u REG 8,1 1073741824 263839 /var/lib/postgresql/10/main/base/18036/69116.7 (deleted)
postgres 9500 postgres 90u REG 8,1 1073741824 263842 /var/lib/postgresql/10/main/base/18036/69116.8 (deleted)
postgres 9500 postgres 91u REG 8,1 1073741824 263845 /var/lib/postgresql/10/main/base/18036/69116.9 (deleted)
postgres 9500 postgres 92u REG 8,1 1073741824 263848 /var/lib/postgresql/10/main/base/18036/69116.10 (deleted)
postgres 9573 postgres 31u REG 8,1 0 262940 /var/lib/postgresql/10/main/base/18036/69110 (deleted)
postgres 9573 postgres 32u REG 8,1 0 263336 /var/lib/postgresql/10/main/base/18036/69119 (deleted)
postgres 9573 postgres 33u REG 8,1 0 263341 /var/lib/postgresql/10/main/base/18036/69120 (deleted)
postgres 9573 postgres 34u REG 8,1 0 263342 /var/lib/postgresql/10/main/base/18036/69121 (deleted)
postgres 9589 postgres 22u REG 8,1 0 262940 /var/lib/postgresql/10/main/base/18036/69110 (deleted)
postgres 9589 postgres 23u REG 8,1 0 263336 /var/lib/postgresql/10/main/base/18036/69119 (deleted)
postgres 9589 postgres 24u REG 8,1 0 263341 /var/lib/postgresql/10/main/base/18036/69120 (deleted)
postgres 9589 postgres 25u REG 8,1 0 263342 /var/lib/postgresql/10/main/base/18036/69121 (deleted)
It seems that a lot of space is still used by (deleted) files from the Postgres database. How can I free that disk space?
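For completeness, a hedged sketch of the usual remedy: the space is held by the running postgres backends that still have the deleted files open (as lsof shows), so restarting the PostgreSQL service releases it. The service name below assumes the Debian packaging of PostgreSQL 10 visible in the paths above:
# Restart the PostgreSQL 10 cluster so the backends drop their handles to the
# deleted relation and WAL files; df should then report the space as free again.
sudo systemctl restart postgresql@10-main    # or simply: sudo systemctl restart postgresql
df -h /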
Addas
(11 rep)
Jul 30, 2019, 11:50 AM
• Last activity: Sep 19, 2025, 10:03 PM
0
votes
1
answers
289
views
Can not Truncate the database because “secondaries has no log was added”
I am a newbie to SQL Server. I got an error when trying to truncate the database:
>Database can not shrink until all secondaries have moved past the point where the log was added.
When I check the status of the secondary on the primary machine (Always On High Availability Dashboard), it shows
>health: good and
>Synchronization state: Synchronizing.
But the status of the database on the secondary is NULL, not ONLINE. When I check the Always On High Availability dashboard on the secondary machine, its state is "No data available on secondary replicas".
**Question:** What should I do, and what is going on with the database?
(I want to truncate because there is no longer enough free space. I use E:\ for all database files; 421.5GB is used and 78.5GB is free.)
Here is how I checked the status recently, with the query below and two results: on the primary machine (pic 1) and on the secondary machine (pic 2).
I have tried suspending, removing and re-joining the secondary replica databases on the secondary machine (APEX-SQL2), but nothing has changed...
>Additionally, I ran the 3 queries below:
*Thank you for reading. Please help.*
SELECT d.name, r.replica_server_name, ars.role_desc, ars.operational_state_desc,
drs.synchronization_state_desc, ars.connected_state_desc,
drs.last_hardened_lsn, d.log_reuse_wait_desc
FROM sys.availability_databases_cluster AS adc
JOIN sys.databases AS d ON adc.group_database_id = d.group_database_id
JOIN sys.dm_hadr_database_replica_states AS drs ON d.database_id = drs.database_id
JOIN sys.availability_replicas AS r ON drs.replica_id = r.replica_id
JOIN sys.dm_hadr_availability_replica_states ars ON r.replica_id = ars.replica_id





Anh Nguyen
(1 rep)
Apr 10, 2023, 04:58 PM
• Last activity: Sep 19, 2025, 09:06 PM
7
votes
4
answers
888
views
Using a filtered index when setting a variable
I know from other questions and posts that when SQL compiles a query plan it can only use a filtered index if the filtered index is guaranteed to be able to be used every time the query runs. This means you can't use a variable in the where clause because sometimes it might be able to use the filtered index and sometimes not.
One way around this is to use OPTION(RECOMPILE), so that on the executions where the filtered index can be used, it will be.
Doing some testing, I found that this query can use a filtered index (note, I'm forcing the index just to prove a point):
SELECT MAX(table1.SomeDateField)
FROM dbo.table1 WITH(INDEX(MyFilteredIndex))
WHERE table1.filteredColumn = @variable
OPTION (RECOMPILE)
However, if I want to assign the result to a variable, I'm bang out of luck:
SELECT @OutputVariable = MAX(table1.SomeDateField)
FROM dbo.table1 WITH(INDEX(MyFilteredIndex))
WHERE table1.filteredColumn = @variable
OPTION (RECOMPILE)
results in:
> Msg 8622, Level 16, State 1, Line 15 Query processor could not produce
> a query plan because of the hints defined in this query. Resubmit the
> query without specifying any hints and without using SET FORCEPLAN.
The query can clearly use the filtered index, as it runs fine when I don't save the output to a variable.
I have ways of rewriting this query to hard-code @variable and remove the problem, but can someone explain why the first query can use the filtered index while the second query can't?
Greg
(3292 rep)
Dec 18, 2020, 01:18 AM
• Last activity: Sep 19, 2025, 08:22 PM
0
votes
1
answers
134
views
record type has not been registered when using row in postgresql select block subquery
When I try to use the ROW function in a SELECT subquery in PostgreSQL 14.8, like this:
select (col1).* from (
select (SELECT ROW(1,2) as col) as col1 FROM newtable tk
) a
it shows the error:
SQL Error [42809]: ERROR: record type has not been registered
This is the newtable DDL:
CREATE TABLE public.newtable (id varchar NULL);
Is ROW not supported in a subquery like this? I also tried:
select (col1).f1 from (
select (SELECT ROW(1,2) as col) as co11 FROM newtable
)
this sql works fine:
select (col1) from (
select (SELECT ROW(1,2) as col) as co11 FROM newtable
)
What should I do to fix this issue? How do I get the column in the outer query? I have tried searching Google, but it seems no one is facing the same issue (with a subquery).
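For reference, a hedged workaround worth trying: give the row a named composite type, so the record's fields are registered and (col1).* can expand them (the type name pair is made up):
-- A named composite type makes the record's structure known outside the subquery
CREATE TYPE pair AS (f1 int, f2 int);
select (col1).* from (
    select (SELECT ROW(1,2)::pair as col) as col1 FROM newtable tk
) a;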
Dolphin
(939 rep)
Apr 8, 2025, 11:22 AM
• Last activity: Sep 19, 2025, 08:02 PM
-1
votes
1
answers
274
views
connection String and Listener
I am using SQL Server 2017. I would like to know how a connection string can be used to talk to the databases via the listener rather than talking directly to the DB.
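For reference, a hedged example of the kind of connection string I mean, pointing the client at an availability group listener instead of a specific server (the listener name, port and database are placeholders):
Server=tcp:MyAgListener,1433;Database=MyAppDb;Integrated Security=SSPI;MultiSubnetFailover=True;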
KIBALA keys
(1 rep)
Mar 13, 2020, 07:41 PM
• Last activity: Sep 19, 2025, 07:07 PM
0
votes
1
answers
283
views
Mysql grant restricted to specific hours of a day
I'd like to know if it is possible in MySQL to grant SELECT permission to a user, but only within specific hours of the day.
I have created several views and a new user, and want to let them browse those MySQL views only (for example) between 10pm and 11pm.
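For reference, a hedged sketch of the closest workaround I can think of: a pair of scheduled events that grant and revoke the privilege each day (this assumes the event scheduler is enabled and the definer has the GRANT OPTION; the database, view and user names are placeholders):
SET GLOBAL event_scheduler = ON;
CREATE EVENT grant_night_select
    ON SCHEDULE EVERY 1 DAY STARTS '2022-01-22 22:00:00'
    DO GRANT SELECT ON mydb.my_view TO 'report_user'@'%';
CREATE EVENT revoke_night_select
    ON SCHEDULE EVERY 1 DAY STARTS '2022-01-22 23:00:00'
    DO REVOKE SELECT ON mydb.my_view FROM 'report_user'@'%';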
Piotrekchorzow
Jan 22, 2022, 12:30 PM
• Last activity: Sep 19, 2025, 05:09 PM
0
votes
1
answers
2309
views
Combine multiple rows into one single column (MS Access)
I am trying to build a query from a table (Tbl_XYZ) that will give me the information for the primary key all in one row, rather than creating duplicates for that primary key and showing multiple rows. Example below (this is just a sample):
| Account Number(PK) | System_Name | Table_Name|
|:---- |:------: | -----: |
| 012345 | DW | Customer |
| 012345 | DW | Sales |
| 012345 | DW | Address |
| 012345 | AWS | aws_Cust |
| 012345 | AWS | aws_product |
I was thinking about creating 2 separate tables for System_Name and Table_Name but it would still give me values in different rows since the table names are different.
My expectation (of course, I will have to create separate queries for the different system names, DW and AWS):
| Account Number | System_Name | Table_Name|
|:---- |:------: | -----: |
| 012345 | DW | Customer / Sales / Address |
| 012345 | AWS | aws_Cust / aws_product |
Any help with this would be greatly appreciated.
I am using MS Access 2013
Thanks.
RayJ
(1 rep)
Apr 6, 2022, 02:29 PM
• Last activity: Sep 19, 2025, 03:09 PM
0
votes
1
answers
14
views
SQL Loader rejecting date: column not allowed here When importing date
I am trying to import a file using SQL Loader in Toad and am receiving an error message "ORA-00984 column not allowed here". I previously was getting an error message that there was a non-numeric value and now it's switched to this.
The last thing I tried was using TO_DATE(substr(:EFFECTIVEDATE,1,10),'MM-DD-YYYY'). I've tried many other things that I don't remember anymore; I've been at this for 2.5 hours. Below is the table information, the ctl information, and a few records. One of my other concerns is that some of the dates are in "m/d/yyyy" format.
Here's what I'm using to create the table:
Create Table saphrdw.VP_ACCRUALTRANV42
(PERSONFULLNAME nVarChar2(64),
PERSONNUM nVarChar2(15),
PAYCODENAME nVarChar2(50),
ACCRUALCODENAME nVarChar2(50),
ACCRUALCODESHORTNM Char(25),
TOACCRUALCODENAME nVarChar2(50),
EFFECTIVEDATE date,
ACCRUALTRANTYPE Number(10),
ACCRUALTRANTYPENM nVarChar2(39),
ACCRUALTRANAMOUNT Number(16,6),
LIMITAMOUNT Number(16,6),
GRANTAMOUNT Number(16,6),
CARRYFORWARDAMOUNT Number(16,6),
ORIGINALDATE Date,
ENTEREDONDTM Date,
PROBATIONAMOUNT Number(16,6),
TOPERSONFULLNAME nVarChar2(64),
TOPERSONNUM nVarChar2(15),
ACCRUALCODETYPE Number(10),
HOURSPERDAYINSECS Number(10),
ONPROBATIONSW Number(10),
DISQUALIFIEDSW Number(10),
SUSPENDEDSW Number(10),
EXCLUDEDSW Number(10),
EDITBYRESETSW Number(10),
ALLOWEDITSW Number(10),
CURRPAYPERIODSTART Date,
CURRPAYPERIODEND Date,
PREVPAYPERIODSTART Date,
PREVPAYPERIODEND Date,
NEXTPAYPERIODSTART Date,
NEXTPAYPERIODEND Date,
EMPLOYEEID Number(10),
TOEMPLOYEEID Number(10),
TOACCRUALCODEID Number(10),
ACCRUALTRANID Number(10),
ACCRUALCODEID Number(10),
PAYCODEID Number(10),
DATASOURCEID Number(10),
PERSONID Number(10),
CLIENTUSERNAME nVarChar2(70),
FUNCTIONCODE nVarChar2(2),
DEVICENAME nVarChar2(50),
ENDDATE Date,
FUNCTAREAID Number);
Here's the ctl code:
INTO TABLE SAPHRDW.VP_ACCRUALTRANV42
INSERT
FIELDS TERMINATED BY ',' optionally enclosed by '"'
TRAILING NULLCOLS
(PERSONFULLNAME,
PERSONNUM,
PAYCODENAME,
ACCRUALCODENAME,
ACCRUALCODESHORTNM,
TOACCRUALCODENAME,
EFFECTIVEDATE "mm/dd/yyyy",
ACCRUALTRANTYPE,
ACCRUALTRANTYPENM,
ACCRUALTRANAMOUNT,
LIMITAMOUNT,
GRANTAMOUNT,
CARRYFORWARDAMOUNT,
ORIGINALDATE,
ENTEREDONDTM,
PROBATIONAMOUNT,
TOPERSONFULLNAME,
TOPERSONNUM,
ACCRUALCODETYPE,
HOURSPERDAYINSECS,
ONPROBATIONSW,
DISQUALIFIEDSW,
SUSPENDEDSW,
EXCLUDEDSW,
EDITBYRESETSW,
ALLOWEDITSW,
CURRPAYPERIODSTART,
CURRPAYPERIODEND,
PREVPAYPERIODSTART,
PREVPAYPERIODEND,
NEXTPAYPERIODSTART,
NEXTPAYPERIODEND,
EMPLOYEEID,
TOEMPLOYEEID,
TOACCRUALCODEID,
ACCRUALTRANID,
ACCRUALCODEID,
PAYCODEID,
DATASOURCEID,
PERSONID,
CLIENTUSERNAME,
FUNCTIONCODE,
DEVICENAME,
ENDDATE,
FUNCTAREAID)
Here are a few lines I'm trying to insert:
PERSONFULLNAME,PERSONNUM,PAYCODENAME,ACCRUALCODENAME,ACCRUALCODESHORTNM,TOACCRUALCODENAME,EFFECTIVEDATE,ACCRUALTRANTYPE,ACCRUALTRANTYPENM,ACCRUALTRANAMOUNT,LIMITAMOUNT,GRANTAMOUNT,CARRYFORWARDAMOUNT,ORIGINALDATE,ENTEREDONDTM,PROBATIONAMOUNT,TOPERSONFULLNAME,TOPERSONNUM,ACCRUALCODETYPE,HOURSPERDAYINSECS,ONPROBATIONSW,DISQUALIFIEDSW,SUSPENDEDSW,EXCLUDEDSW,EDITBYRESETSW,ALLOWEDITSW,CURRPAYPERIODSTART,CURRPAYPERIODEND,PREVPAYPERIODSTART,PREVPAYPERIODEND,NEXTPAYPERIODSTART,NEXTPAYPERIODEND,EMPLOYEEID,TOEMPLOYEEID,TOACCRUALCODEID,ACCRUALTRANID,ACCRUALCODEID,PAYCODEID,DATASOURCEID,PERSONID,CLIENTUSERNAME,FUNCTIONCODE,DEVICENAME,ENDDATE,FUNCTAREAID
"M, P",20316,NULL,Vacaciones 2024,NULL,NULL,1/1/2027,16,GRANT EXPIRATION,-162000,0,162000,NULL,00:00.0,NULL,0,NULL,NULL,2,32400,NULL,0,0,0,0,0,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,63705,NULL,NULL,77267037,210,NULL,NULL,63705,NULL,NULL,NULL,NULL,NULL
"H, T",99664,Vacation,Vacation Earned,VCE,NULL,12/31/2026,1,TAKEN,-43200,NULL,NULL,NULL,00:00.0,23:32.4,NULL,NULL,NULL,1,28800,NULL,0,0,0,0,1,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,147650,NULL,NULL,83382465,6,178,1036661,147650,ARPERC1,E,NULL,NULL,2
"H, T",99664,Vacation,Vacation Available,VCA,NULL,12/31/2026,1,TAKEN,-43200,NULL,NULL,NULL,00:00.0,23:32.4,NULL,NULL,NULL,1,28800,NULL,0,0,0,0,1,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,147650,NULL,NULL,83382468,5,178,1036661,147650,ARPERC1,E,NULL,NULL,2
"L, G",10020,Vacation,Vacation Earned,VCE,NULL,12/31/2026,1,TAKEN,-43200,NULL,NULL,NULL,00:00.0,18:40.7,NULL,NULL,NULL,1,28800,NULL,0,0,0,0,1,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,147752,NULL,NULL,83382657,6,178,1036661,147752,ARPERC1,E,NULL,NULL,2
"L, G",10020,Vacation,Vacation Available,VCA,NULL,12/31/2026,1,TAKEN,-43200,NULL,NULL,NULL,00:00.0,18:40.7,NULL,NULL,NULL,1,28800,NULL,0,0,0,0,1,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,147752,NULL,NULL,83382659,5,178,1036661,147752,ARPERC1,E,NULL,NULL,2
"L, G",10020974,Vacation,Vacation Earned,VCE,NULL,12/30/2026,1,TAKEN,-14400,NULL,NULL,NULL,00:00.0,18:40.7,NULL,NULL,NULL,1,28800,NULL,0,0,0,0,1,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,00:00.0,147752,NULL,NULL,83382656,6,178,1036661,147752,ARPERC1,E,NULL,NULL,2
Stephanie
(1 rep)
Sep 18, 2025, 09:33 PM
• Last activity: Sep 19, 2025, 02:47 PM