
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

3 votes
2 answers
313 views
How to import an image from a file, using OPENROWSET, to a table record?
I am trying to import a small image from a file located on my MS SQL Server to a column in a database set up as VARBINARY(MAX). I have declared a variable as VARBINARY(MAX) and would like to assign the image to that variable. Once assigned, I have to locate the records and store that image to those records. I have a query that works to store the image to the column, but for the life of me I cannot import new images from files on the server's SSD.
USE [Mydb]
DECLARE @ImageFile VARBINARY(MAX)
SELECT @ImageFile = 'Abracon.jpg'
FROM
    OPENROWSET(BULK 'C:\Users\admin\Pictures\Abracon.jpg', SINGLE_BLOB)
    AS BLOB;
Running this query gives me the following error:

> Msg 257, Level 16, State 3, Line 16
> Implicit conversion from data type varchar to varbinary(max) is not allowed. Use the CONVERT function to run this query.

I read the OPENROWSET page at learn.microsoft.com, but I still don't see my error. What am I doing wrong?
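For reference, a minimal sketch of the usual pattern, assuming the intent is to read the file's bytes through the single BulkColumn column that SINGLE_BLOB exposes; the target table and column names below are hypothetical.

```sql
USE [Mydb];
DECLARE @ImageFile VARBINARY(MAX);

-- SINGLE_BLOB returns one row with a single varbinary(max) column named BulkColumn
SELECT @ImageFile = BulkColumn
FROM OPENROWSET(BULK 'C:\Users\admin\Pictures\Abracon.jpg', SINGLE_BLOB) AS ImageSource;

-- dbo.Parts, PictureData and PartName are placeholder names for the target table
UPDATE dbo.Parts
SET PictureData = @ImageFile
WHERE PartName = 'Abracon';
```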
Gromit (33 rep)
Jul 14, 2025, 10:09 PM • Last activity: Jul 15, 2025, 10:42 PM
0 votes
1 answer
209 views
MariaDB: Writing even 25MB of data into a LONGBLOB is very slow
We recently added MariaDB as a storage option for our tool. Before that we only allowed sqlite. Among other tables I have a very simple temporary file storage table:
create table tmp_file_storage
(
    uuid          varchar(255) not null,
    creation_date datetime     not null,
    filename      varchar(255) null,
    data_type     varchar(255) null,
    file          longblob     null
)
In production, writing a single entry into this table with a 25MB file (even if the table is empty), takes about 8 minutes. On my development PC it takes about 2 minutes, which is already way too slow for such a small file. Writing the data currently happens in 2 steps:
INSERT INTO tmp_file_storage(uuid, creation_date, filename, data_type, file)
VALUES ('someuuid', 'YYYY-MM-DD hh:mm:ss', NULL, NULL, NULL);
UPDATE tmp_file_storage
SET
  filename = 'filename',
  data_type = 'xml',
  file = '.... 25MB of data ...'
WHERE
  tmp_file_storage.uuid = 'someuuid';
First we thought it could be something about our code, but we were able to reproduce this problem with just sending those queries to MariaDB. I tried using different Storage Engines (InnoDB, MyISAM, Aria) and it always took pretty much the same time. Here is the my.ini in case that matters. We haven't changed a lot of stuff.
[mysqld]
datadir=D:/.../data
innodb_lock_wait_timeout = 120
innodb_log_file_size     = 512M
innodb_log_buffer_size   = 128M
innodb_buffer_pool_size  = 1G
max_allowed_packet       = 500M
The database runs on the same server, and doesn't have a lot of load. Maybe around 10-100 requests a minute. With 25MB already taking so long, something seems to be actively slowing MariaDB down. In sqlite this operation usually takes less than a second. Any ideas?
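A hedged way to narrow this down, assuming direct access to the server: check the variables that commonly govern large BLOB writes, then reproduce the slow UPDATE with server-generated data so the client driver and network are taken out of the picture.

```sql
-- Settings that commonly affect large BLOB write speed; worth comparing dev vs. production
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW VARIABLES LIKE 'log_bin';

-- Reproduce the write with server-generated data and profile where the time goes
SET PROFILING = 1;
UPDATE tmp_file_storage
SET file = REPEAT('x', 25 * 1024 * 1024)
WHERE uuid = 'someuuid';
SHOW PROFILES;
```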
Tekay37 (101 rep)
Jun 27, 2024, 02:28 PM • Last activity: Jun 17, 2025, 09:04 AM
1 vote
1 answer
515 views
Can JDBC Oracle file-stream BLOB inserts be sped up?
I have four different ways of inserting file data into an Oracle database using JDBC and PreparedStatements.

- **Approach A**
  ps.setBytes(1, fileDataInByteArray);
- **Approach B**
  Blob blob = ps.getConnection().createBlob();
  blob.setBytes(1, fileDataInByteArray);
  ps.setBlob(1, blob);
- **Approach C**
  ps.setBinaryStream(1, fileDataAsInputStream);
- **Approach D**
  ps.setBlob(1, fileDataAsInputStream);

In general, approaches C and D seem very useful for large file data: A and B can run out of memory, so C and D are more reliable. However, when the file is smaller, the danger of OOM is not there, and I find that A and B take less time for the database insert - I suppose because the data is already a byte array. What can I do to speed up the insert when using C and D? I have tried wrapping the input stream in a BufferedInputStream and played around with the buffer size, but that did not help. Are we at the mercy of how fast Oracle can read the data? Can that be sped up?
Raster R (11 rep)
Jul 6, 2023, 08:45 AM • Last activity: May 30, 2025, 08:04 AM
0 votes
1 answer
3436 views
Inserting BLOB/CLOB over dblinks
I'm trying to find the best way to load a remote table with a blob column (the blob only contains text, so a clob would have been better). So far we've been able to reduce load time from 218 minutes to 2 minutes using a view on the source table and converting the blob to varchar2 using (simplified):
CREATE OR REPLACE FORCE EDITIONABLE VIEW "TABLE_V" ("ID", "DATE", "CLOB_PT1", "CLOB_PT2", "CLOB", CONSTRAINT "TABLE_V_PK" PRIMARY KEY ("ID") RELY DISABLE) AS
  SELECT
  "ID",
  "DATE",
  UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(BLOB, 2000, 1)) "CLOB_PT1",
  UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(BLOB, 2000, 2001)) "CLOB_PT2",
  CASE WHEN LENGTH(INHOUD) > 4000 THEN TO_CLOB(BLOB) END "CLOB"
  FROM OWNER.TABLE;
Because we cannot guarantee the size of the blob will continue to fit in our two varchars, we put anything bigger into a separate clob column in the view. Now the idea is to merge the varchars or the clob back into one clob in our staging area. Here comes the challenge: ORA-22992: cannot use LOB locators selected from remote tables. Concatenating the two varchars into the targeted clob column works fine:
INSERT 
  /*+  APPEND PARALLEL  */ 
  INTO SA.TESTTABLE
  (
    ID ,
    DATE ,
    CLOB
  ) 
SELECT 
  TABLE_V_1.ID ,
  TABLE_V_1.DATE , 
  TABLE_V_1.CLOB_PT1||TABLE_V_1.CLOB_PT2
FROM 
  OWNER.TABLE_V_1@DB_LNK TABLE_V_1
Inserting the clob into the targeted clob column works fine.
INSERT 
  /*+  APPEND PARALLEL  */ 
  INTO SA.TESTTABLE
  (
    ID ,
    DATE ,
    CLOB
  ) 
SELECT 
  TABLE_V_1.ID ,
  TABLE_V_1.DATE , 
  CLOB
FROM 
  OWNER.TABLE_V_1@DB_LNK TABLE_V_1
Using a CASE WHEN to insert either the concatenated varchars or the clob column, however, fails:
INSERT 
  /*+  APPEND PARALLEL  */ 
  INTO SA.TESTTABLE
  (
    ID ,
    DATE ,
    CLOB
  ) 
SELECT 
  TABLE_V_1.ID ,
  TABLE_V_1.DATE , 
  CASE WHEN CLOB IS NOT NULL
    THEN TABLE_V_1.CLOB_PT1||TABLE_V_1.CLOB_PT2
    ELSE TABLE_V_1.INHOUD_CLOB
  END
FROM 
  OWNER.TABLE_V_1@DB_LNK TABLE_V_1
I also tried concatenating all three columns by default (varchars 1 and 2 will be empty if the blob size is > 4000): same error. Why is this failing? Any suggestions on getting this to work?
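One hedged idea, sketched with the question's simplified names and not verified against the actual schema: since each branch works on its own, run the load as two inserts split on whether the remote clob column is populated, so no single statement has to evaluate a CASE over a remote LOB locator.

```sql
-- Rows whose content fits in the two varchars
INSERT /*+ APPEND PARALLEL */ INTO SA.TESTTABLE (ID, "DATE", "CLOB")
SELECT v.ID, v."DATE", v.CLOB_PT1 || v.CLOB_PT2
FROM OWNER.TABLE_V_1@DB_LNK v
WHERE v."CLOB" IS NULL;

-- Rows that needed the full clob column in the view
INSERT /*+ APPEND PARALLEL */ INTO SA.TESTTABLE (ID, "DATE", "CLOB")
SELECT v.ID, v."DATE", v."CLOB"
FROM OWNER.TABLE_V_1@DB_LNK v
WHERE v."CLOB" IS NOT NULL;
```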
R. Sluiter (1 rep)
Nov 6, 2020, 11:30 AM • Last activity: May 27, 2025, 04:05 PM
-1 votes
1 answer
281 views
Vertica/Postgres: copy text to binary column
Until now I used Oracle to store XML files in a BLOB column. Is there any option in Vertica to copy an XML file into a binary column? I saw there is a VARBINARY data type, but I haven't succeeded in loading the file into a single row of that column. If someone is familiar with this process in PostgreSQL, that knowledge would be helpful too.
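Since the question also invites a PostgreSQL answer, here is a minimal PostgreSQL sketch of loading a whole file into one bytea row; it assumes the file is readable on the database server and the caller has the required file-read privilege (superuser, or membership in pg_read_server_files on newer releases). Table and path names are hypothetical.

```sql
-- pg_read_binary_file() returns bytea, so the entire file lands in a single row
CREATE TABLE xml_store (
    id      serial PRIMARY KEY,
    payload bytea NOT NULL
);

INSERT INTO xml_store (payload)
VALUES (pg_read_binary_file('/path/on/server/file.xml'));
```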
NoamiA (101 rep)
Mar 5, 2020, 10:49 AM • Last activity: May 15, 2025, 08:05 AM
0 votes
1 answer
30 views
What is the charset for a BLOB?
I'm looking at the [Oracle Database Sample Schemas](https://github.com/oracle-samples/db-sample-schemas), and found that there is a charset column accompanying a BLOB column: https://github.com/oracle-samples/db-sample-schemas/blob/main/customer_orders/co_create.sql#L105
-- Details of goods that customers can purchase

-- auto-generated definition
create table PRODUCTS
(
    PRODUCT_ID         NUMBER generated by default on null as identity
        constraint PRODUCTS_PK
            primary key,
    PRODUCT_NAME       VARCHAR2(255 char) not null,
    UNIT_PRICE         NUMBER(10, 2),
    PRODUCT_DETAILS    BLOB
        constraint PRODUCTS_JSON_C
            check (product_details is json),
    PRODUCT_IMAGE      BLOB,
    IMAGE_MIME_TYPE    VARCHAR2(512 char),
    IMAGE_FILENAME     VARCHAR2(512 char),
    IMAGE_CHARSET      VARCHAR2(512 char),
    IMAGE_LAST_UPDATED DATE
)
/

comment on table PRODUCTS is 'Details of goods that customers can purchase'
/

comment on column PRODUCTS.PRODUCT_ID is 'Auto-incrementing primary key'
/

comment on column PRODUCTS.PRODUCT_NAME is 'What a product is called'
/

comment on column PRODUCTS.UNIT_PRICE is 'The monetary value of one item of this product'
/

comment on column PRODUCTS.PRODUCT_DETAILS is 'Further details of the product stored in JSON format'
/

comment on column PRODUCTS.PRODUCT_IMAGE is 'A picture of the product'
/

comment on column PRODUCTS.IMAGE_MIME_TYPE is 'The mime-type of the product image'
/

comment on column PRODUCTS.IMAGE_FILENAME is 'The name of the file loaded in the image column'
/

comment on column PRODUCTS.IMAGE_CHARSET is 'The character set used to encode the image'
/

comment on column PRODUCTS.IMAGE_LAST_UPDATED is 'The date the image was last changed'
/
What would the IMAGE_CHARSET column be for?

> comment on column PRODUCTS.IMAGE_CHARSET is 'The character set used to encode the image'

Do we need a charset for storing image bytes in the PRODUCT_IMAGE column?
Jin Kwon (165 rep)
May 6, 2025, 05:57 AM • Last activity: May 7, 2025, 07:22 AM
1 vote
1 answer
1892 views
SQL Server - Access denied when reading a file from Azure Blob Storage container using SAS key
I have a requirement to read CSV files from an Azure blob storage account. So far, this throws access denied errors every time I run my query:

CREATE DATABASE SCOPED CREDENTIAL
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2021-06-08&ss=b&srt=sco&sp=rl&se=2023-03-31T09:38:05Z&st=2022-09-01T02...';

CREATE EXTERNAL DATA SOURCE
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://.blob.core.windows.net/ ',
    CREDENTIAL =  -->
);

SELECT *
FROM OPENROWSET (
    BULK '/.csv',
    DATA_SOURCE = '',
    FORMAT = 'CSV',
    FORMATFILE = '/.fmt',
    FORMATFILE_DATA_SOURCE = '',
    FIRSTROW = 2
) AS test

Below are some more details on how everything was set up:

- The storage account kind is BlockBlobStorage.
- In the **Firewalls and virtual networks** setting, access is only **Enabled from selected virtual networks and IP addresses**. I already added my public IP address, as well as the IP address of the Azure SQL Server, which I got from here: https://learn.microsoft.com/en-us/azure/azure-sql/database/connectivity-architecture?view=azuresql#gateway-ip-addresses
- The whole process works if I set it to Enabled from all networks. The SQL server and the storage account live within the same resource group.
- I also configured a VNet that is added to both resources.
- I saw this thread, which is exactly my issue, but the accepted answer does not work for me: https://stackoverflow.com/questions/58340185/cannot-bulk-load-because-the-file-file-csv-could-not-be-opened-operating-syst

I checked all the documentation regarding SAS access keys, database scoped credentials, external data sources and VNet networking, and I don't see any limitation that would cause SAS key access to be denied. Did I miss a configuration step? I find it a little weird that in most cases the recommendation is to set the storage account to **Enabled from all networks**, which might be a security issue.
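For comparison, a hedged sketch of the same setup with purely hypothetical names (mycred, myblobsource, mystorageacct, mycontainer, myfile); one documented gotcha worth double-checking is that the SAS token in SECRET must be supplied without the leading '?' character.

```sql
CREATE DATABASE SCOPED CREDENTIAL mycred
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     -- SAS token copied from the portal, with the leading '?' removed
     SECRET = 'sv=2021-06-08&ss=b&srt=sco&sp=rl&se=...&st=...&spr=https&sig=...';

CREATE EXTERNAL DATA SOURCE myblobsource
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://mystorageacct.blob.core.windows.net/mycontainer',
    CREDENTIAL = mycred
);

SELECT *
FROM OPENROWSET (
    BULK 'myfile.csv',
    DATA_SOURCE = 'myblobsource',
    FORMAT = 'CSV',
    FORMATFILE = 'myfile.fmt',
    FORMATFILE_DATA_SOURCE = 'myblobsource',
    FIRSTROW = 2
) AS src;
```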
Dustine Tolete (119 rep)
Oct 3, 2022, 03:38 AM • Last activity: May 5, 2025, 03:06 PM
-1 votes
2 answers
1521 views
Error on updating a field with binary values
I have a problem with emoji displaying on a webpage after migrating to a new DB server, but I have a workaround: I get the binary values of a certain field of TABLE1 on our old DB server, then update TABLE1 on the new server using the script below:
UPDATE old_table t1 INNER JOIN new_table t2 ON t1.id = t2.id
SET t2.message = CAST(CONCAT("0x",HEX(t1.message)) AS BINARY);
But after executing this, the message field in the new table literally becomes "0x1E395565". If I run the script below instead, it shows the real message, but doing this for millions of records would take too long to execute, hence the UPDATE with the INNER JOIN:
UPDATE new_table SET message = 0x1E395565 where id =1;
How do I update this correctly? I concatenate "0x" onto HEX(t1.message) because when I do a mysqldump with --hex-blob and charset=BINARY, the values carry a 0x prefix. The data type of the message field is VARCHAR(1000). I know this might be wrong, but is there any workaround without altering the table?
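A hedged observation with a sketch: the 0x... form is literal syntax the parser understands, not something that can be built up in a string at run time, so CAST(CONCAT('0x', HEX(...)) AS BINARY) just stores that text. UNHEX(HEX(x)) round-trips the original bytes instead; whether that alone fixes the emoji display also depends on the column and connection character sets.

```sql
-- UNHEX(HEX(x)) returns the same binary value as x, rather than the text '0x...'
UPDATE old_table t1
INNER JOIN new_table t2 ON t1.id = t2.id
SET t2.message = UNHEX(HEX(t1.message));
```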
JRA (137 rep)
Feb 18, 2021, 01:48 PM • Last activity: Mar 15, 2025, 08:01 PM
2 votes
2 answers
7330 views
Writing an Oracle BLOB field to a raw file in a cursor or FOR loop results in empty files
I have been stuck on this for a while and had been Googling for an answer but can't seem to find any. I have an Oracle 9i table with a BLOB field that I need to extract into raw files on the Linux server. There are thousands of rows, so I need to write some kind of loop to accomplish this. I got this:

DECLARE
  i           NUMBER := 0;
  l_file      UTL_FILE.FILE_TYPE;
  l_buffer    RAW(32767);
  l_amount    BINARY_INTEGER := 32767;
  l_pos       INTEGER := 1;
  l_blob      BLOB;
  l_blob_len  INTEGER;
BEGIN
  FOR rec IN (SELECT id, fname, fblob FROM tablename WHERE fname LIKE '%png') LOOP
    i := i + 1;
    --DBMS_OUTPUT.put_line ('Record ' || i || ' id ' || rec.id);
    --DBMS_OUTPUT.put_line ('Record ' || i || ' fname ' || rec.fname);
    l_blob_len := DBMS_LOB.getlength(rec.fblob);

    -- Open dest file.
    l_file := UTL_FILE.fopen('BLOBDIR', rec.fname, 'w', 32767);

    -- Read chunks of the BLOB and throw them to the file until done.
    WHILE l_pos < l_blob_len LOOP
      DBMS_LOB.read(rec.ATTACHMENT_BLOB, l_amount, l_pos, l_buffer);
      UTL_FILE.put_raw(l_file, l_buffer, TRUE);
      l_pos := l_pos + l_amount;
    END LOOP;
  END LOOP;
  -- DBMS_OUTPUT.put_line ('Procedure Looping Example is done');
END;

I tested the loop with the DBMS_OUTPUT and it worked fine. However:

* When I ran the code, all 10 of the *.png files show up in my BLOBDIR - but the files are empty.
* When I ran the code for each individual file (i.e. FOR rec IN (SELECT id, fname, fblob FROM tablename WHERE fname = 'filename.png')) - I got the file and it has data.
* When I ran the code with only 2 files in the loop (i.e. FOR rec IN (SELECT id, fname, fblob FROM tablename WHERE fname IN ('filename.png', 'filename2.png'))) - both files got written to the BLOBDIR, but only the first file has data; the second file has 0 data.

I ran this code through TOAD and through sqlplus - same result. Can someone help me with this? Thank you.
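Two details of the posted block would match these symptoms: l_pos is never reset to 1 for the next row (so every BLOB after the first starts reading past its own end), and the file handle is never closed with UTL_FILE.fclose. A hedged sketch of the loop body with just those changes, not verified on 9i:

```sql
FOR rec IN (SELECT id, fname, fblob FROM tablename WHERE fname LIKE '%png') LOOP
  l_blob_len := DBMS_LOB.getlength(rec.fblob);
  l_pos      := 1;      -- reset for every row
  l_amount   := 32767;  -- reset too, since DBMS_LOB.read updates it on a short final chunk
  l_file     := UTL_FILE.fopen('BLOBDIR', rec.fname, 'w', 32767);

  WHILE l_pos <= l_blob_len LOOP
    DBMS_LOB.read(rec.fblob, l_amount, l_pos, l_buffer);
    UTL_FILE.put_raw(l_file, l_buffer, TRUE);
    l_pos := l_pos + l_amount;
  END LOOP;

  UTL_FILE.fclose(l_file);  -- flush and release the handle before the next file
END LOOP;
```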
lmbNbirt (31 rep)
Aug 21, 2017, 05:09 PM • Last activity: Mar 15, 2025, 11:05 AM
0 votes
1 answer
608 views
How to keep Image data in file system with Postgres database?
I am wondering whether there is any way to store image-type (binary) data with PostgreSQL so that it resides in the file system rather than directly in a column, but without storing the path as a text link in the database - the data should still be handled as binary, whether through some third-party open-source technology or anything else. I am using PostgreSQL version 15.
Saadman Karim (21 rep)
Mar 2, 2023, 06:19 AM • Last activity: Feb 22, 2025, 10:09 AM
3 votes
1 answer
2085 views
Updating a table which contains an Image datatype column
I have a table which contains a few columns, one of which is an image datatype; this is used to store BLOB data, which could be anything from a JPG to a large PDF. I have a requirement to update one of the other columns in the table, however the query is taking a ludicrously long time to complete - that's if it completes at all. The table has the following format:

ID (int), HexData (image), FileName (nvar), Filetype (nvar), folder (nvar), user (int)

The query used is simply:

update [database].[dbo].[myTable] set user = 5 where id = 2

This is a test table on a test database, not the live system. There are only 5 rows, no indexes (including clustered), and no other tables in the database.
SeanR (257 rep)
Dec 14, 2015, 10:09 AM • Last activity: Feb 11, 2025, 07:19 AM
1 vote
1 answer
89 views
If I replace images in a table, will existing BLOB data be cleaned up and the database shrink?
I recently updated a table in my database by saving small thumbnail images as a byte[] to a varbinary(max) column. This significantly improved the speed of loading thumbnails (previously I was loading them from the file system), but due to an error in my code where the thumbnail images were being resized but not compressed, the images were about 140-160 KB in size and created a ton of BLOBs. The images should be about 5 or 6 KB, and in testing, my new code does produce images of around that size. If I iterate through the table and replace all of the byte[] images in the varbinary(max) column, will this clean up all these unnecessary BLOBs? My database understandably grew significantly as a result of my error, and I'm trying to get it back down, as its current size is creating issues with our backups.

What I understood of the BLOB data was that it is stored separately in the database and referenced from the varbinary cell in my table, so I wasn't sure whether replacing the value would delete the associated BLOB data. If it is automatically deleted, why wouldn't the database automatically shrink? I am testing this in the dev environment, but after the last mistake I made, I wanted to double check. I saw in SSMS that there are options for shrinking the database, so I'm assuming that would be the task to run after updating all of the image data; of course there will be many backups throughout that process as well. I'm just trying to avoid any additional issues. I have a database that was backed up at about 50 MB and grew to about 3.5 GB after my error. I can replace the data with smaller images, but it doesn't seem like that alone would be enough, because the database won't automatically shrink.
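Replacing the values frees the old LOB content logically, but the space is not handed back on its own. A hedged sketch of the usual follow-up, with hypothetical table and logical file names; whether an index REORGANIZE or a full table rebuild applies depends on whether the varbinary column lives in a clustered index or a heap.

```sql
-- Compact the LOB pages left behind by the oversized thumbnails (hypothetical table name)
ALTER INDEX ALL ON dbo.Thumbnails REORGANIZE WITH (LOB_COMPACTION = ON);

-- Then shrink the data file toward a target size in MB (hypothetical logical file name)
DBCC SHRINKFILE (N'MyDb_Data', 512);
```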
M_Lyons10 (13 rep)
Feb 6, 2025, 07:00 PM • Last activity: Feb 7, 2025, 06:09 AM
0 votes
1 answer
63 views
Is it better for performance, to create a separate filegroup for tables that contain blobs?
I am currently working on an application that has approximately 500,000 users. Each user has a profile picture averaging around 32 KB in size. I am using SQL Server and I want to save these profile pictures in a VARBINARY column in a separate table named ProfilePictures (very much like what is described here). Is it better in terms of performance to create a separate filegroup for this table? Will this improve performance, and if so, how?
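For what it's worth, a hedged sketch of the mechanics (database, path and file names are hypothetical): a dedicated filegroup is created and the table's LOB data is directed to it with TEXTIMAGE_ON. Whether this actually helps often depends on whether the filegroup sits on separate, faster storage.

```sql
ALTER DATABASE MyAppDb ADD FILEGROUP BlobFG;

ALTER DATABASE MyAppDb
ADD FILE (
    NAME = N'BlobFG_Data',
    FILENAME = N'D:\SQLData\MyAppDb_BlobFG.ndf',
    SIZE = 1GB
) TO FILEGROUP BlobFG;

-- TEXTIMAGE_ON places the VARBINARY(MAX) data on the dedicated filegroup
CREATE TABLE dbo.ProfilePictures
(
    UserId  INT            NOT NULL PRIMARY KEY,
    Picture VARBINARY(MAX) NOT NULL
) ON [PRIMARY] TEXTIMAGE_ON BlobFG;
```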
PedroAsking (3 rep)
Jan 26, 2025, 11:26 AM • Last activity: Jan 27, 2025, 02:04 PM
0 votes
2 answers
454 views
Why does my table keep getting corrupted after removing large amounts of data?
I have a database that is about 400 GB. The majority of the space is consumed by images of utility bills stored in string form as varchar(max) in a single table; it's a poor database design that I inherited. I have already exported the utility bills as PDFs to a network folder. Now I am trying to replace the images in the database with an empty string. I have been updating in small batches of 40k to 80k records at a time. The problem is that after a few batches, the table gets corrupted: I'll get a message that the update failed, and when I run CHECKDB I get

> Msg 8909, Level 16, State 1, Line 1
> Table error

It's happened twice so far and I have had to restore the database from a backup. I need to know why this is happening and if there is a way to avoid it. I'm doing this in a test environment and I need to get this problem figured out before I can work on the production database. Thank you for your help.

Edit: Changed binary to string. One of the images is stored as a continuous string of 65,535 characters.

Edit: This is happening on MSSQL 2014 SP3 (Version 12.0.6024.0)
B West (1 rep)
Sep 24, 2020, 03:08 AM • Last activity: Jan 15, 2025, 01:00 AM
1 vote
1 answer
3413 views
LOAD_FILE() works on SELECT, but returns NULL on insert
I'm having trouble using the LOAD_FILE() function in my environment. I'm trying to add a file to a BLOB field, but whenever I try to insert or update this field with LOAD_FILE(), the field is set to NULL. I have already checked the common problems, which include system variables, file size, max packet size and permission problems (both DB and Linux permissions), and even a workaround that involved moving the file to a MySQL-owned directory; the problem persists. Do note that a statement like SELECT LOAD_FILE('/foo/bar.jpg'); does work, and the file's raw data gets displayed, either via the CLI or a DBA tool like DBeaver. Dump of the table:
CREATE TABLE `table` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `string` varchar(3) NOT NULL,
  `blob` blob NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
Each file I want to insert is around 8 KB in size. I'd appreciate any help with this issue.
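A hedged checklist in SQL form: LOAD_FILE() silently returns NULL when the file lies outside secure_file_priv, is not readable by the mysqld OS user, is larger than max_allowed_packet, or when the connecting user lacks the FILE privilege. The path below is only a common Linux default for secure_file_priv, not necessarily this server's.

```sql
-- Conditions under which LOAD_FILE() silently returns NULL
SHOW VARIABLES LIKE 'secure_file_priv';
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW GRANTS FOR CURRENT_USER();   -- the FILE privilege must be present

-- Insert reading from inside the secure_file_priv directory (path is an assumption)
INSERT INTO `table` (`string`, `blob`)
VALUES ('abc', LOAD_FILE('/var/lib/mysql-files/bar.jpg'));
```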
Gabriel I. (79 rep)
Jul 25, 2019, 04:05 PM • Last activity: Sep 21, 2024, 04:02 PM
13 votes
1 answer
21894 views
Can I do VACUUM FULL to pg_largeobject table?
I have two tables (table1, table2) in a Postgres 9.1 database, both having a column of oid type. Each table has 1 million records, and the pg_largeobject table size is around 40 GB. I have removed 0.9 million records from each table and executed the commands below:

vacuum full analyze table1;
vacuum full analyze table2;

Still there is no change in the pg_largeobject table size (autovacuum is enabled). Do I need to run the above command on the pg_largeobject table too? Will it impact anything?
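A hedged note with a sketch: deleting the rows that reference large objects does not remove the objects themselves, so pg_largeobject keeps its contents until they are unlinked (the bundled vacuumlo utility automates exactly this). blob_col below stands in for whatever the oid columns are actually called.

```sql
-- Unlink large objects no longer referenced by either table (hypothetical column name blob_col)
SELECT lo_unlink(m.oid)
FROM pg_largeobject_metadata m
WHERE NOT EXISTS (SELECT 1 FROM table1 t WHERE t.blob_col = m.oid)
  AND NOT EXISTS (SELECT 1 FROM table2 t WHERE t.blob_col = m.oid);

-- Only after that can a VACUUM FULL on the catalog table give space back to the OS
VACUUM FULL pg_largeobject;
```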
RBB (815 rep)
Aug 29, 2015, 08:53 PM • Last activity: Aug 27, 2024, 09:40 PM
0 votes
0 answers
147 views
How does table partitioning size affect query performance?
In my system, I have a set of Oracle 19c database tables which store temporary data that eventually expires. My solution is to partition these tables on the create timestamp of the entries:

CREATE TABLE TEMP (
    ID             VARCHAR(100) NOT NULL,
    TYPE           VARCHAR(100) NOT NULL,
    CONTEXT_OBJECT BLOB,
    TIME_CREATED   NUMBER(19)
)
PARTITION BY RANGE (TIME_CREATED) INTERVAL()

I have tested setting the INTERVAL to a value equivalent to one day. In that scenario, the partition grows to about 300 GB during the day. I can see that the initial insert queries are fine, but under DB load the INSERT INTO ... VALUES (...) queries sometimes take 10 seconds to complete. So I wanted to try reducing the interval to maybe 1 hour instead of a day and check whether it improves the performance of the insert queries. As per the analysis, the root cause for the slow inserts is DW - contention. FYI, the indexes for this table are global indexes, and one column contains a BLOB object. The reason for partitioning is to drop partitions based on TIME_CREATED once the expiry time is exceeded. Can someone help me figure this out?
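For reference only, a hedged illustration (the question's actual INTERVAL expression is elided above): if TIME_CREATED holds epoch milliseconds, a daily interval on a NUMBER key could be declared as below, and the same pattern with 3600000 would give hourly partitions instead.

```sql
CREATE TABLE temp_demo
(
    id             VARCHAR2(100) NOT NULL,
    type           VARCHAR2(100) NOT NULL,
    context_object BLOB,
    time_created   NUMBER(19)
)
PARTITION BY RANGE (time_created)
INTERVAL (86400000)                      -- one day, assuming epoch milliseconds
(
    PARTITION p_initial VALUES LESS THAN (0)
);
```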
LLS (1 rep)
Aug 22, 2024, 02:35 AM • Last activity: Aug 22, 2024, 12:57 PM
0 votes
1 answer
40 views
lo_import implementation in PostgreSQL's libpq
In the current implementation of lo_import, lo_write is called once for each 8 KB of the file; 8 KB is the buffer size set by LO_BUFSIZE. Why is it so small? It slows the file upload down by 30-40%, because lo_write sends a request and then waits for the response from the server.
Дмитрий Питаков (1 rep)
Jun 17, 2024, 01:13 PM • Last activity: Jun 17, 2024, 02:33 PM
0 votes
1 answer
58 views
How to reference uploaded blob objects in database tables
When there are URLs in a database table that represent media files uploaded to my own blob storage, which approach is common?

1) Link a column relationally to an objects table, e.g.:
- videoHdObjectId: "XXX"
- videoSdObjectId: "XXX"

2) Include the uploaded URLs directly in the table, e.g.:
- videoHdObjectUrl: "https:// ..."
- videoSdObjectUrl: "https:// ..."

3) Include both objectIds and objectUrls directly in the table, e.g.:
- videoHdObjectId: "XXX"
- videoSdObjectId: "XXX"
- videoHdObjectUrl: "https:// ..."
- videoSdObjectUrl: "https:// ..."

I started with option 1, where I reference the objects table via an id, but in my query to my internal database for Facebook ads, for example, an ad potentially has a videos table with 5 different references to objects (hdVideo, sdVideo, videoPreviewImage, watermarkedHdVideo, watermarkedSdVideo), and then images has lots of references to originalSize, resized and watermarked images. If I'm querying for 100 ads, that multiplies out to a lot of subqueries. Is it still appropriate to reference objectIds in this way?

Rephrased: I currently have a record of all uploaded videos in a table called objects. In my "ads" table, I need to left join it every time. Is it common practice to link to the id of the object to get the url of the file? I ask only because certain tables have 8-10 references to the objects table due to things like sdVideo, hdVideo, watermarkedVideo, videoPreviewImage, etc.
SELECT 
    ads.id, 
    obj1.url AS videoHdObjectUrl, 
    obj2.url AS videoSdObjectUrl 
FROM 
    ads 
LEFT JOIN 
    objects obj1 ON ads.videoHdObjectId = obj1.id 
LEFT JOIN 
    objects obj2 ON ads.videoSdObjectId = obj2.id;
wongx (109 rep)
Jun 15, 2024, 07:18 PM • Last activity: Jun 16, 2024, 05:43 PM
2 votes
1 answer
979 views
Choosing the right blocksize for Oracle Database tablespaces with lobs
We are currently switching from an older character set to Unicode using impdp/expdp on Oracle Database 19c. We had to change several data types. We changed tables with LOBs from BASICFILE to SECUREFILE and found that the tablespace holding these LOBs only uses an 8K block size. The table contains several TB of PDFs of around 2-3 MB each. Does it make sense to increase the block size of the tablespace holding the LOBs from 8K to 32K?
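A hedged sketch of the mechanics only (names, paths and sizes are illustrative, and this says nothing about whether 32K is actually faster for 2-3 MB SecureFile LOBs): a non-default block size needs a matching buffer cache before the tablespace can be created.

```sql
-- A 32K buffer cache must exist before a 32K-block tablespace can be created
ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE = BOTH;

CREATE TABLESPACE lob_data_32k
    DATAFILE '/u01/oradata/MYDB/lob_data_32k01.dbf' SIZE 10G AUTOEXTEND ON
    BLOCKSIZE 32K;
```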
We are currently switching from an older character set to Unicode using impdp/expdp on Oracle Database 19c. We had to changed several data types. We changed tables with LOBs from basicfile to securefile and found that the tablespace holding this lobs only uses an 8K blocksize. The table containes several TBs with pdfs around 2-3mb. Does it makes sense to increase the Blocksize of the tablespace holding the LOBs from 8k to 32k?
r0tt (1078 rep)
Mar 2, 2021, 04:38 PM • Last activity: May 16, 2024, 04:29 PM
Showing page 1 of 20 total questions