I am running into the issue below while taking a pg_dump (PG14) of a table with a bytea column.
pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 1460154641
pg_dump: error: The command was: COPY TO stdout;
The table "ABC" in concern is just 60MB large(total size) and it has a bytea column.
But the error says is not able to allocate a request size of 1.3GB. What are we missing here? Could you please help? Thanks.
Update: I was able to take a backup of the table using the command below without error.
COPY public.abc TO stdout WITH (FORMAT binary);
--Successful execution
But the command below fails:
COPY public.abc TO stdout;
ERROR: invalid memory alloc request size 1480703501
Even a plain SELECT returns the same error:
select * from ABC
ERROR: invalid memory alloc request size 1480703501
How did it even allow a bytea value of more than 1GB to be inserted? The table is just 60MB and contains just one row.
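
For completeness, here is a minimal sketch of persisting that binary dump to a file and restoring it from psql, sidestepping the text escaping that triggers the error (the file path /tmp/abc.bin is hypothetical):

\copy public.abc TO '/tmp/abc.bin' WITH (FORMAT binary)
-- and to restore into an empty copy of the table:
\copy public.abc FROM '/tmp/abc.bin' WITH (FORMAT binary)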
- Could be data corruption, could be a lot of large objects. – Laurenz Albe, Aug 2, 2023 at 10:12
- Even the "select" query fails with the same error. It looks like the bytea column might be larger than 1GB. But how was it possible to insert such a big row in the first place? There is a single row in the table. – Sajith P Shetty, Aug 3, 2023 at 8:57
1 Answer
(deleted: Your assessment that the table is only 60MB must be wrong. You don't tell us how you made that determination, so I don't know where it went wrong.) But it might be that small with compression.
The value's size is < 1GB, but its string representation is > 1GB; so it can be stored, but it can't be escaped for normal output. Assuming bytea_output is set to 'hex' (the default), that means the raw value is about half of 1480703501 bytes long.
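
To see that 2x blow-up concretely, here is a minimal illustration (the literal value is arbitrary):

SELECT octet_length('\xdeadbeef'::bytea);       -- 4 raw bytes
SELECT length(('\xdeadbeef'::bytea)::text);     -- 10 characters: "\x" plus 2 hex digits per byte

By that rule, a raw value of roughly 740MB needs a ~1.48GB text buffer, which exceeds PostgreSQL's 1GB single-allocation cap (MaxAllocSize); that cap is exactly what the "invalid memory alloc request size" error reports.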
The value could have been set using binary input, or it could have been constructed in place using bytea operations or functions (such as the || concatenation operator), or for some values it could have been set using the escape format, where plain ASCII non-control characters are input with only one character per byte.
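
As a hedged sketch of how such a row could come to exist, repeated doubling with || builds a bytea that fits under the 1GB storage limit while its hex text form does not (the target table is an assumption, and running this takes several hundred MB of backend memory):

DO $$
DECLARE
    v bytea := decode(repeat('00', 1024 * 1024), 'hex');  -- 1MB of zero bytes
BEGIN
    FOR i IN 1..9 LOOP      -- nine doublings: 1MB -> 512MB
        v := v || v;
    END LOOP;
    RAISE NOTICE 'built % raw bytes', octet_length(v);
    -- INSERT INTO abc VALUES (v);  -- storing succeeds, but the hex text form
    -- (~1.07GB) would exceed the 1GB allocation cap on output
END $$;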
People usually store very large files in the filesystem and then store the file name in the database table. If it really needs to be in the database, where it can be protected and/or replicated by WAL, then large objects are the only alternative (in PostgreSQL) that I know of.
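
If you go the large-object route, a hedged migration sketch follows (the column name payload is an assumption; lo_from_bytea handles the value server-side, so it works even while text output of the row fails):

ALTER TABLE public.abc ADD COLUMN payload_oid oid;
UPDATE public.abc SET payload_oid = lo_from_bytea(0, payload);  -- 0 lets the server pick the OID
ALTER TABLE public.abc DROP COLUMN payload;
-- read it back in chunks with lo_get(payload_oid, offset, length),
-- or export it client-side with psql's \lo_export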
- Hi @jjanes, thanks for taking the time on this. I am really surprised that you are confident my 60MB figure is wrong. I am 100% sure the total size of the relation in question is 60MB including TOAST, the index, and the data. Isn't there a possibility that, since it is stored as binary (compressed), it could be smaller in size? – Sajith P Shetty, Aug 3, 2023 at 14:56
- I agree that the value's size may still be < 1GB. One way I can fix this is to change the data type from bytea to a large object (lo, out of line). Can you suggest other options, if any? – Sajith P Shetty, Aug 3, 2023 at 15:04
- Right. I'm accustomed to the built-in compression being far too weak for things like that, but if the data has very low complexity (like 700,000,000 repetitions of 'A'), then a small table can indeed hold a much larger value. – jjanes, Aug 4, 2023 at 1:44
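
That compression effect is easy to verify at a smaller scale (a minimal sketch; the sizes are arbitrary):

CREATE TEMP TABLE t (b bytea);
INSERT INTO t SELECT decode(repeat('41', 10 * 1024 * 1024), 'hex');  -- 10MB of 'A' bytes
SELECT octet_length(b) AS raw_bytes, pg_column_size(b) AS stored_bytes FROM t;
-- stored_bytes comes out tiny because TOAST compresses the highly repetitive value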
- Thanks for the update. – Sajith P Shetty, Aug 4, 2023 at 15:58