
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

4 votes
3 answers
7962 views
LIKE search in bytea datatype (PostgreSQL)
Table:

\d blob
 Column  |  Type   | Collation |                 Default
---------+---------+-----------+------------------------------------------
 id      | integer |           | nextval('modlink_blob_id_seq'::regclass)
 content | bytea   |           |

This statement:

SELECT COUNT(*)
FROM "job"
INNER JOIN "blob" ON ("job"."input_id" = "blob"."id")
WHERE UPPER("blob"."content"::text) LIKE UPPER('%csv%');

Error message:

ERROR: invalid memory alloc request size 1989028364

What can I do to make this statement not fail?
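A sketch of one way to sidestep the failing text cast (assuming the goal is to find the ASCII bytes 'csv' or 'CSV'; mixed-case variants would need extra branches): search the raw bytes with position(), which never converts the bytea to text:

SELECT COUNT(*)
FROM "job"
INNER JOIN "blob" ON ("job"."input_id" = "blob"."id")
WHERE position('csv'::bytea IN "blob"."content") > 0
   OR position('CSV'::bytea IN "blob"."content") > 0;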
guettli (1591 rep)
Oct 15, 2018, 01:35 PM • Last activity: Jul 30, 2025, 01:07 PM
5 votes
3 answers
1078 views
Limiting PostgreSQL logging
I have a table that can contain reasonably large BYTEA values (the same could apply to large TEXT values). It looks like this:

CREATE TABLE testtable (
    id    SERIAL PRIMARY KEY,
    name  TEXT UNIQUE NOT NULL, -- This is just an example: this could be the PK
    value BYTEA                 -- This could be TEXT
);

If the application using this table tries to insert two rows using the same name, I get this error in the logs (this is an expected error, of course):

ERROR: duplicate key value violates unique constraint "testtable_name_key"
STATEMENT: INSERT INTO testtable(name, value) VALUES ('a', ...something rather long...);

While logging the error is useful, as is logging the statement (and possibly the value of "name" in this particular context), logging the long BYTEA or TEXT value isn't. In fact, binary data in the logs is escaped in text form (e.g. E'\\377\\327...'), which makes it take even more space than it would in binary form in the database. This can cause more problems than it helps when the value's size is tens of kB or more.

Is there a way to prevent these values from appearing in the logs when there is an error? I'm still interested in knowing there was an error (and preferably in having some information about the statement). Of course, this is not just about this particular error and use case, but about any error log that could potentially dump multiple kilobytes of uninteresting data into the logs.

Otherwise, is there any way to truncate the "STATEMENT: INSERT INTO ..." line? Failing that, would there be any way to keep the ERROR: line but remove the STATEMENT: line altogether?

I've tried to set log_error_verbosity = terse in postgresql.conf, but this doesn't seem to change anything here. (I'm also using log_destination = 'syslog' to log to rsyslog, so a solution that limits each log "bundle" to a few lines there would be a reasonable compromise.)

I'm using PostgreSQL 8.4, but solutions only present in newer versions would also be interesting.
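A minimal sketch of one workaround, assuming the application can route inserts through a wrapper function and bind the large value as a parameter: the logged STATEMENT line then contains only the short function call, and the wrapper re-raises a compact error. The function name and message here are illustrative, not from the question:

CREATE OR REPLACE FUNCTION insert_testtable(p_name text, p_value bytea)
RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO testtable(name, value) VALUES (p_name, p_value);
EXCEPTION WHEN unique_violation THEN
    -- re-raise without echoing the large value
    RAISE EXCEPTION 'duplicate name: %', p_name;
END;
$$;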
Bruno (1557 rep)
Nov 28, 2013, 01:17 AM • Last activity: May 3, 2025, 12:14 PM
0 votes
1 answer
608 views
How to keep Image data in file system with Postgres database?
I am wondering whether there is any way to store image data (binary data) with PostgreSQL such that it resides in the file system rather than directly in a column, but without storing the path as a text link in the database; it should still be accessible as binary data, using any kind of third-party open-source technology or anything else. I am using PostgreSQL version 15.
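A sketch of one built-in option using server-side file access (requires superuser or, on newer versions, the pg_read_server_files role; paths are relative to the data directory, and the images/ subdirectory here is a made-up example):

SELECT fname,
       pg_read_binary_file('images/' || fname) AS content  -- returns bytea
FROM pg_ls_dir('images') AS t(fname);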
Saadman Karim (21 rep)
Mar 2, 2023, 06:19 AM • Last activity: Feb 22, 2025, 10:09 AM
9 votes
8 answers
5609 views
Reverse Byte-Order of a postgres bytea field
I'm currently working on a table that contains hashes, stored in bytea format. Converting the hashes to hex strings, however, yields the wrong order of bytes. Example:

SELECT encode(hash, 'hex') FROM mytable LIMIT 1;

Output:   1a6ee4de86143e81
Expected: 813e1486dee46e1a

Is there a way to reverse the order of bytes for all entries?
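One possible in-database fix is a helper that reverses the byte order (a sketch; the function name is made up, and the named parameter in the SQL body needs PostgreSQL 9.2+): string_agg over bytea, ordered by descending byte position:

CREATE OR REPLACE FUNCTION reverse_bytes(b bytea)
RETURNS bytea LANGUAGE sql IMMUTABLE AS $$
    SELECT string_agg(substring(b FROM i FOR 1), ''::bytea ORDER BY i DESC)
    FROM generate_series(1, octet_length(b)) AS i
$$;

-- e.g. to fix all rows in place:
UPDATE mytable SET hash = reverse_bytes(hash);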
R. Martin (123 rep)
Nov 29, 2016, 07:37 PM • Last activity: Feb 14, 2025, 05:04 AM
0 votes
1 answer
5545 views
Error message from server: ERROR: invalid memory alloc request size
I am running into the below issue while taking a pg_dump (PG14) of a table with a bytea column.

**pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 1460154641**

**The command was: COPY TO stdout;**

The table "ABC" in question is just 60 MB (total size) and it has a bytea column, but the error says it cannot allocate a request of 1.3 GB. What are we missing here? Could you please help? Thanks.

Update: I was able to take a backup of the table using the below command without error.
COPY public.abc TO stdout WITH (FORMAT binary);
--Successful execution
But below command fails:
COPY public.abc TO stdout ;
ERROR:  invalid memory alloc request size 1480703501
Even the select query return same error :
select * from ABC
ERROR: invalid memory alloc request size 1480703501
How did it even allow a bytea value of more than 1 GB to be inserted in the first place? The table is just 60 MB in size and has just 1 row.
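One way to see why binary COPY succeeds while text COPY fails (a sketch; bytea_col stands in for the real column name): text output re-encodes the value, with hex format roughly doubling its size, so a value that fits as raw bytes can exceed the 1 GB allocation limit once rendered:

SELECT octet_length(bytea_col)         AS raw_bytes,
       2 + 2 * octet_length(bytea_col) AS hex_rendered_bytes  -- '\x' + 2 chars per byte
FROM public.abc;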
Sajith P Shetty (312 rep)
Aug 2, 2023, 09:24 AM • Last activity: Feb 13, 2025, 11:04 AM
0 votes
1 answer
1523 views
What is the most efficient way to check whether a BYTEA column contains data, in PostgreSQL?
We have an online platform where we sell eBooks. In the database, we have a "books" table, which contains the books' information. It also contains the book file, stored as BYTEA.

In various pages, I need to check if a book's file exists in the database. But I have noticed that the pages load slowly, so I was wondering if this is caused by the manner in which I am checking. Currently, I am checking if "book_file IS NOT NULL". Does PostgreSQL read the whole file in order to determine whether it is not null, which could be slowing things down? If so, is there another way to check if the file exists, without having to read the whole file?

I read up on some binary string functions in the PostgreSQL documentation and was wondering if functions like octet_length(string), substring(string [from int] [for int]) or get_bit(string, offset) might perform better? Below is an example of a simplified query I am using to check for the existence of the file.
SELECT books.book_title
FROM books
WHERE books.book_file_pdf IS NOT NULL
ORDER BY books.book_id DESC
Does this kind of condition slow things down? If so, can this query be optimized, to check solely if the file column is not empty, without reading the file?
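For context, IS NOT NULL is answered from the row header's null bitmap, so PostgreSQL does not read the stored file at all; the check itself is cheap. If the page is still slow, one thing worth trying is a partial index matching the filter and sort (a sketch; the index name is made up):

CREATE INDEX books_with_pdf_idx ON books (book_id DESC)
WHERE book_file_pdf IS NOT NULL;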
Aleksi (1 rep)
Dec 20, 2022, 01:26 PM • Last activity: Jan 30, 2025, 07:05 PM
1 vote
1 answer
1041 views
How to convert bit(n) to bytea when n >= 256?
How can I convert a BIT(256) into a 32-byte BYTEA? More generally, how can I convert from BIT(n) to BYTEA when n is large enough that I can't cast the bits through an INT8? Example: I've got a bunch of sha256 hashes stored in a table:
CREATE TABLE foo(sha256 CHARACTER(64) NOT NULL); -- hex encoded sha256 digests
INSERT INTO foo (sha256) VALUES 
   ('e643a277fab67cca43d9348623c7551d7382784033a697eb7f5484178be7c53d')
 , ('73483f797a83373fca1b968c11785b98c4fc4803cdc7d3210811ca8b075d6d76')
 , ('b2b91baab6785f6bc6bc43c3a72f700b6bc64c478b744c539c80a857f55b00ca')
 , ('ba8cf9b1f2827f46d45b5061a65a11e93b849174185af9d92508a38b4df04945')
 , ('7507a8f88da14e9bdd5acc310ad260943bc788249a49de210205c1b537f41406')
 , ('868ca27b41d804c4877275595b4a2cf5d85559d6805693f95b68a09a69dc8634')
 , ('f10480d4b52a000b8fd8643147396b869f7c11727623023525b4ca041242356e')
 , ('53ce065eaa1e2f10eb86d10d07f8c748ce482bea9897018862f91cd142bb1eb9')
 , ('4e5d320a6c34bea34f1746bf8a85d00cf77d6d8a3b52d8649c93b2282d3f64e6')
 , ('9f1b957647c30fdd05ad319092226143e437965014235343113dc8dd6a58dda7');

CREATE TABLE bar AS
  SELECT ('x' || sha256)::BIT(256) AS sha256
  FROM foo;

/* Doesn't work:
SELECT sha256::BYTEA FROM bar; -- cannot cast type bit to bytea
*/
Notes: I got the ('x' || str)::BIT(n) trick from https://stackoverflow.com/questions/17208945/whats-the-easiest-way-to-represent-a-bytea-as-a-single-integer-in-postgresql/17214093#17214093
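Since the source column is hex text, one way to get bytea with no BIT(n) intermediate at all is decode() (a sketch; the table name baz is made up):

CREATE TABLE baz AS
SELECT decode(sha256, 'hex') AS sha256  -- 32-byte bytea per hash
FROM foo;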
Steven Kalt (73 rep)
Feb 9, 2021, 02:17 AM • Last activity: Aug 12, 2024, 10:04 AM
0 votes
1 answer
1375 views
Slow performance on representing select bytea column query of table in PostgreSQL
Is it possible to improve how the data is delivered to the client when it runs select bytea from table limit 150;? It takes one and a half minutes, but in pg_activity I see a "client_write" wait event. We use a 10 Gbit network. The DB version is PostgreSQL 13. When I create a table from that result, it has a table size of 285 MB. Oddly, select count(*) from (select bytea from table) only takes 10 ms. Is there any tip or way to improve this?
Melih (284 rep)
Nov 17, 2023, 08:54 AM • Last activity: Nov 17, 2023, 02:29 PM
1 vote
0 answers
2265 views
Optimal index for storing crypto addresses using bytea
I have a simple table that stores Ethereum addresses:
CREATE TABLE address (
    id SERIAL PRIMARY KEY,
    address bytea NOT NULL,
    attributes text[] NOT NULL DEFAULT '{}'::text[]
);

CREATE UNIQUE INDEX address_address_idx ON address(address bytea_ops);
The addresses are stored in a column of bytea type, e.g.
INSERT INTO address (address) VALUES (decode('0ef6F5fc97b1DC5601F858159e1C410Ae2306A47', 'hex'));
and they are looked up using:
EXPLAIN ANALYZE
SELECT encode(address, 'hex')
FROM address
WHERE address = decode('78f294e4a5c73b0b6b230c2e4104e8852f16a3a4', 'hex');
which uses index as expected:
Index Only Scan using address_address_idx on address  (cost=0.42..2.65 rows=1 width=32) (actual time=0.013..0.014 rows=1 loops=1)
  Index Cond: (address = '\x78f294e4a5c73b0b6b230c2e4104e8852f16a3a4'::bytea)
  Heap Fetches: 0
Planning Time: 0.103 ms
Execution Time: 0.026 ms
However, because of how often this query is called, it is among the top 3 queries by execution time (according to pg_stat_statements). What can I do to make the lookup more efficient?
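Since every lookup is a plain equality probe, one avenue worth benchmarking (a sketch, not a guaranteed win; the unique btree must stay for the constraint) is a hash index, which is crash-safe since PostgreSQL 10 and can be cheaper per equality lookup:

CREATE INDEX CONCURRENTLY address_address_hash_idx
    ON address USING hash (address);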
Gajus (1334 rep)
May 31, 2021, 11:57 PM • Last activity: May 9, 2023, 09:34 AM
1 vote
1 answer
303 views
Why does PostgreSQL mangle my hex input in a LIKE query?
On some LIKE queries on a bytea column, PostgreSQL is mangling my input byte sequence. Why could this be happening?
Expected query plan:    b >= '\x4cbf996e2c527620d067571c6f650643edfc5cdc0bd39c29'::bytea
From actual query plan: b >= '\x4cbf996e2c527620d067571c6f650643edfcdc0bd39c29'::bytea
Notice the difference in the byte sequences (spaces show missing byte):
Expected: \x4cbf996e2c527620d067571c6f650643edfc5cdc0bd39c29
Actual:   \x4cbf996e2c527620d067571c6f650643edfc  dc0bd39c29
Table:
=> \d tbl
                 table "tbl"
 Column |   Type    | Collation | Nullable | Default
--------+-----------+-----------+----------+---------
 id     | int       |           |          |
 b      | bytea     |           |          |
Query:
SELECT * FROM tbl WHERE b LIKE '\x4cbf996e2c527620d067571c6f650643edfc5cdc0bd39c29'::bytea||'%'
Query planner:
EXPLAIN ANALYZE SELECT * FROM tbl WHERE b LIKE '\x4cbf996e2c527620d067571c6f650643edfc5cdc0bd39c29'::bytea||'%';
                                           QUERY PLAN
-----------------------------------------------------------------------------------------------------
 Index Scan using idx on tbl (cost=0.56..8.58 rows=670 width=172) (actual time=0.020..0.021 rows=0 loops=1)
   Index Cond: ((b >= '\x4cbf996e2c527620d067571c6f650643edfcdc0bd39c29'::bytea) AND (b < '\x4cbf996e2c527620d067571c6f650643edfcdc0bd39c2a'::bytea))
   Filter: (b ~~ '\x4cbf996e2c527620d067571c6f650643edfc5cdc0bd39c2925'::bytea)
 Planning Time: 0.090 ms
 Execution Time: 0.033 ms
(5 rows)
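The dropped byte is 0x5c, the ASCII backslash, which LIKE treats as its escape character even in bytea patterns, so it swallows the following byte when the planner derives the index range. A sketch of a workaround under that assumption: double every 0x5c byte in the pattern:

SELECT * FROM tbl
WHERE b LIKE '\x4cbf996e2c527620d067571c6f650643edfc5c5cdc0bd39c29'::bytea || '%';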
D Schlachter (131 rep)
Feb 13, 2023, 05:08 PM • Last activity: Feb 13, 2023, 05:38 PM
1 vote
1 answer
1146 views
LIKE or match by beginning for binary strings in PostgreSQL
I need to match a binary column against a prefix. For text columns this is easy: LIKE 'image/%', which uses a BTREE index if one exists. Is it possible to do the same, using an index, for a binary column? BTW, I store UTF-8-encoded strings as binary because there is no hard guarantee that all of them are valid UTF-8. So I store a binary representation (possibly even containing \0 (zero) bytes), as I cannot rely on them being UTF-8.
porton (745 rep)
Feb 5, 2022, 02:54 AM • Last activity: Dec 25, 2022, 03:00 PM
0 votes
1 answer
463 views
Create, Insert, and return BYTEA value
I'm creating vector tiles using ST_AsMVT, and I would like to store them immediately after they are created, in addition to returning the object. The below code works fine at creating a bytea response.
WITH mvtgeom AS
(
SELECT ST_AsMVTGeom(
ST_GeomFromText('POLYGON ((0 0, 10 0, 10 5, 0 -5, 0 0))'),
ST_MakeBox2D(ST_Point(0, 0), ST_Point(4096, 4096)),
4096, 0, false)
)

SELECT ST_AsMVT(mvtgeom.*)
FROM mvtgeom
But when I change the last section to include an INSERT statement:
INSERT INTO test SELECT ST_AsMVT(mvtgeom.*)
FROM mvtgeom
I get the response:

> This result object does not return rows

Which is of course true, but how do I store the result when the statement doesn't return rows? The table test only has an ID and a BYTEA column.
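A sketch of one fix, assuming the bytea column is named tile: adding a RETURNING clause makes the INSERT itself produce a result row, so a single statement both stores and returns the tile:

WITH mvtgeom AS
(
SELECT ST_AsMVTGeom(
    ST_GeomFromText('POLYGON ((0 0, 10 0, 10 5, 0 -5, 0 0))'),
    ST_MakeBox2D(ST_Point(0, 0), ST_Point(4096, 4096)),
    4096, 0, false)
)
INSERT INTO test (tile)
SELECT ST_AsMVT(mvtgeom.*) FROM mvtgeom
RETURNING tile;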
Tom Shelley (5 rep)
Nov 2, 2022, 05:48 AM • Last activity: Nov 2, 2022, 07:32 AM
45 votes
3 answers
197174 views
Using PostgreSQL 8.4, how to convert bytea to text value in postgres?
In my application I insert data into the database using C code. Since I receive the strings from an untrusted source, I have escaped them using PQescapeByteaConn of the libpq library, which works perfectly fine, i.e. it results in an octal-escaped string. See the example below.

Input String: \n\t\f\b\p\k\j\l\mestPrepared
Output String: \\012\\011\\014\\010pkjlmestPrepared

The output string is inserted into the database. Now I retrieve that data from the database in Java code using JDBC. How can I unescape the string back to its original value?

I thought of two possible approaches:

1. Change the database retrieval query and pass this field to a string manipulation function of Postgres, i.e. one which can convert bytea to text.
2. Do the decoding in Java code.

I understand that approach 1 would be more efficient. I have tried almost all the functions listed here but nothing is working. Please help!! I am using version 8.4 of Postgres on a Linux machine.
Amit (591 rep)
Nov 14, 2013, 10:20 AM • Last activity: Dec 30, 2020, 10:47 PM
12 votes
2 answers
43343 views
How to convert a bytea column to text?
**How do I convert a bytea column to text in PostgreSQL so that I can read the column properly in pgAdmin?** I have the following SQL query in pgAdmin's query editor:
SELECT event_type, created_at, encode(metadata::bytea, 'escape') 
FROM public.events 
ORDER BY created_at DESC
LIMIT 100
However, it produces an encoded column, with each record more or less resembling the following output:
\203t\00000some_textd\000some_other_textd\0000
How can I get rid of this encoding, so that I only see the original value of the column, in text format:
some_text some_data
What I have also tried:
SELECT event_id, event_type, created_at, decode((encode(metadata, 'escape')::text), 'escape')
FROM public.events
ORDER BY created_at DESC
LIMIT 100
But in the above case, the query returns a decode column of type bytea, and I only see the field [binary data] for each record of the column. I have also tried the first two answers mentioned here, without success, and can't properly translate the last answer to my query.
ryanzidago (395 rep)
Aug 1, 2020, 06:50 PM • Last activity: Nov 25, 2020, 07:29 PM
1 vote
1 answer
6627 views
How to prevent implicit casting from bytea to text during the current connection in postgres?
In Postgres, bytea values are automatically converted to text when inserted into text/varchar columns, based on the bytea_output setting.

I am working with some program code which automatically converts certain values (binary strings) in the program to a bytea format. The problem is that users might accidentally try to insert these values into a text column. By default, Postgres will allow this, but it cannot always work smoothly - for example if there are non-ASCII bytes. I think users may not realise that the strange insert behaviour is due to their use of a binary string in the calling program. Therefore, if a bytea to text conversion happens, I want Postgres to raise an exception.

I am aware of CREATE CAST, but as I understand it, this is a system-wide action. I do not want to change the system behaviour for other connections. I could do CREATE CAST followed by DROP CAST, but this seems dirty to me, as it is still not contained within the connection.

*How do I make (implicit) casts from bytea to text throw an exception only within the current connection?*

The SQL is issued automatically, so I can add a preceding SQL statement before every statement; that's no problem. I was a little surprised by this behaviour because Postgres usually errs on the side of strictness, which I like.

This question follows from my previous question:
- https://dba.stackexchange.com/questions/270293/
user183966 (115 rep)
Jul 3, 2020, 03:32 PM • Last activity: Jul 6, 2020, 10:56 AM
0 votes
1 answer
338 views
How to prevent different connections treating bytea in same query differently?
I find that identical PostgreSQL queries issued by two different clients are handled differently. Specifically, bytea values are inserted differently. A query that demonstrates the behaviour is this:

INSERT INTO "TestTable" ("Input") VALUES (decode('74657374', 'hex'))
74657374 is hexadecimal for 'test'. In one client, 'test' is inserted into the "Input" field, whether that field is text/varchar or bytea. That is the behaviour I desire. In another client, '\x74657374' is inserted into the "Input" field, whether it is text/varchar or bytea. This string is the PostgreSQL literal representation of the bytea bytes of ASCII 'test', but the literal syntax itself is inserted. I do not desire this behaviour.

I use a single piece of hand-written SQL, the bytea value only occurs "within" the query (if column "Input" has type text then neither the literals nor the receiving column have bytea type), and it seems unlikely to me that either client is parsing and then rebuilding the query. Therefore it seems that the difference must be happening on the server where the query is executed. That means there must be some connection-specific configuration setting which is altering the server's behaviour.

*Can anyone please tell me what connection-specific settings could be altering this behaviour?*

I know the queries are really behaving differently and it is not a display issue etc., because I can see rows in the same table having different values ('test' and '\x74657374'). I have tried various alternative bytea handling methods, but they are all affected by this problem.

For those who are interested, the "good" client is pgAdmin III and the "bad" client is the Ruby PG gem. Though, for the reason I gave above, I believe there must be some built-in feature of PostgreSQL supporting this behaviour.
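One connection-specific setting that produces exactly this split is bytea_output (server default 'hex' since 9.0; some clients or setups change it per session, which is an assumption worth verifying here). Under 'escape', all-printable bytes such as 74657374 render as test; under 'hex' they render as \x74657374. A quick check to run from each client:

SHOW bytea_output;            -- 'hex' or 'escape'
SET bytea_output = 'escape';  -- affects only the current session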
user183966 (115 rep)
Jul 3, 2020, 12:18 AM • Last activity: Jul 3, 2020, 06:15 AM
22 votes
3 answers
10639 views
How can I generate a random bytea
I would like to be able to generate random bytea fields of arbitrary length (< 1 GB) for populating test data. What is the best way of doing this?
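One sketch using pgcrypto, whose gen_random_bytes() is capped at 1024 bytes per call, so larger values are assembled with string_agg (here ~1 MiB; scale the series for bigger test blobs):

CREATE EXTENSION IF NOT EXISTS pgcrypto;

SELECT string_agg(gen_random_bytes(1024), ''::bytea)
FROM generate_series(1, 1024);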
Jack Douglas (40537 rep)
Aug 15, 2012, 10:08 AM • Last activity: Mar 24, 2020, 09:06 AM
0 votes
2 answers
130 views
Optimize user segmentation by bytea column
First I would like to explain my problem: I have around 10 million customers in the table. Based on customer attributes, we create segments. The table that joins customers with segments looks like:

customer_segment(customer_id int, segment_id int)

Every 3 hours a cronjob refreshes the segments and reinserts them into this table. Each refresh means: delete all customers from the segment, and insert them again. Deleting customers is fast enough, but I have a problem with the inserts. Inserting 10 million rows into this table takes 20 minutes. That is too much, and I am trying to solve this problem.

**Idea:** I am thinking of optimizing the inserts. My idea is to remove the customer_segment table and instead create a "customers" column in the segments table. This new column will be of bytea type. To store the information whether a user is in a segment, I will generate a binary value each time, where:

* Bit position = customer_id
* Bit value = whether the customer is in the segment or not

By my calculation, for 10 million customers I need 10 million bits, which is only 1.25 MB. So the default for this column will be 1.25 MB of zero bits. After that, if I want to add a single customer to the segment, I will execute the query:

UPDATE segments SET customers = set_bit(customers, 555, 1);

where 555 is my customer ID. If I want to check whether a customer is in the segment, I can do it with:

SELECT COUNT(1) FROM segment WHERE get_bit(customers, 5555) = 1 AND id = 1

Also, to simplify the joins between customer and segment, I created this function:

CREATE OR REPLACE FUNCTION get_segment_customers(segment BYTEA)
RETURNS SETOF integer AS $$
for (let i = 0; i < segment.length; i++) {
    if (0 === segment[i]) {
        continue;
    }
    for (let bit = 0; bit < 8; bit++) {
        if (!(segment[i] & (1 << bit))) {
            continue;
        }
        plv8.return_next((i * 8) + bit);
    }
}
$$ LANGUAGE plv8;

And my query to select customers:

select _c.* from get_segment_customers((SELECT data FROM segments where id = 1)) _t JOIN customer _c ON _c.id = _t;

I tested this and it is fast. The only thing I am worried about is this query:

UPDATE segments SET customers = set_bit(customers, 555, 1);

I am not sure whether PostgreSQL will rewrite the whole value on disk for each such update, or only one bit. If someone can answer the question above and give feedback about the idea and potential problems, I will be happy. Thank you.
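On the worry above: PostgreSQL's MVCC writes a complete new row version on every UPDATE, so each single-bit set_bit() call rewrites (and re-TOASTs) the whole 1.25 MB value. A sketch that builds the bitmap in memory and writes it once per segment instead, assuming a hypothetical staging table segment_members(segment_id, customer_id):

CREATE OR REPLACE FUNCTION build_bitmap(seg int)
RETURNS bytea
LANGUAGE plpgsql AS $$
DECLARE
    bm  bytea := decode(repeat('00', 1250000), 'hex');  -- 1.25 MB of zero bytes
    cid int;
BEGIN
    FOR cid IN SELECT customer_id FROM segment_members WHERE segment_id = seg LOOP
        bm := set_bit(bm, cid, 1);  -- in-memory only; no row version per bit
    END LOOP;
    RETURN bm;
END;
$$;

UPDATE segments SET customers = build_bitmap(id);  -- one write per segment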
bordeux (101 rep)
Feb 1, 2020, 01:30 PM • Last activity: Feb 2, 2020, 04:57 PM
0 votes
0 answers
524 views
Writing CSV directly to a BYTEA column from PostgreSQL tables
Assuming these two tables:

CREATE TABLE data (day DATE, kpi INTEGER);
CREATE TABLE reports (csv BYTEA);

Is it possible to (pseudocode):

INSERT INTO reports (csv) (SELECT to_csv(SELECT day, kpi FROM data));

with the result being a CSV stored as BYTEA in table reports, column csv? I specifically do not want to go through the filesystem (COPY, etc.).
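A sketch of a pure-SQL version (naive CSV: no quoting or escaping, which these date and integer values don't need): string_agg builds the text, and convert_to() turns it into bytea:

INSERT INTO reports (csv)
SELECT convert_to(
           'day,kpi' || E'\n' ||
           string_agg(day::text || ',' || kpi::text, E'\n' ORDER BY day),
           'UTF8')
FROM data;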
crdb (1 rep)
Apr 18, 2019, 04:49 AM
6 votes
2 answers
21165 views
Export image file from bytea column
I am trying to export an image file from a PostgreSQL database. The users_data_circulation table has a photo (bytea) column. My command:

copy (select encode(photo,'hex') from users_data_circulation limit 1) TO '/tmp/imagetest.hex';

In the operating system:

$> xxd -p -r /tmp/imagetest.hex > /tmp/imagetest.jpg
$> file /tmp/imagetest.jpg
/tmp/imagetest.jpg: ASCII text

I can't open the jpg file. How can I convert this file to jpg?

devinim@devinimpostgresql:~$ hexdump -C /tmp/image.hex | more
00000000  5c 5c 33 37 37 5c 5c 33  33 30 5c 5c 33 37 37 5c  |\\377\\330\\377\|
00000010  5c 33 34 30 5c 5c 30 30  30 10 4a 46 49 46 5c 5c  |\340\\000.JFIF\\|
00000020  30 30 30 01 02 5c 5c 30  30 30 5c 5c 30 30 30 01  |000..\\000\\000.|
00000030  5c 5c 30 30 30 01 5c 5c  30 30 30 5c 5c 30 30 30  |\\000.\\000\\000|
00000040  5c 5c 33 37 37 5c 5c 33  34 31 5c 5c 30 30 30 5c  |\\377\\341\\000\|
00000050  76 50 49 43 5c 5c 30 30  30 02 5c 6e 5c 6e 01 5c  |vPIC\\000.\n\n.\|
00000060  5c 30 30 30 5c 5c 33 37  37 5c 5c 33 37 36 5c 5c  |\000\\377\\376\\|
00000070  30 30 30 21 50 69 63 74  75 72 65 20 45 6c 65 6d  |000!Picture Elem|
00000080  65 6e 74 73 2c 20 49 6e  63 2e 20 49 53 45 2f 53  |ents, Inc. ISE/S|
... continues like that
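A sketch of a round-trip that avoids the hex/xxd step, assuming the column holds raw JPEG bytes (the hexdump above hints the value may actually contain escape-format text stored double-escaped, in which case one extra decode(..., 'escape') pass would be needed first): base64 survives psql's text output, and base64 -d ignores the line wraps:

$> psql -At -c "select encode(photo, 'base64') from users_data_circulation limit 1" mydb | base64 -d > /tmp/imagetest.jpg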
Melih (284 rep)
Oct 1, 2016, 07:34 AM • Last activity: Feb 6, 2019, 08:47 AM