
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
0 answers
22 views
Streaming dataframe in pl/python from postgres?
The current reporting software we use only interacts with SQL databases, but many of our datasets are now stored in other formats. I was wondering whether it's possible to load and stream compressed Pandas DataFrames directly through PostgreSQL's PL/Python functions or procedures, rather than ingesting complete archives into large tables within PostgreSQL and querying from there. Some of our queries require processing numerous compressed columnar data dumps, making it impractical to hold all results in memory and return them in one go. I know the loading part is doable; what I am uncertain about is whether PostgreSQL supports streaming the results somehow, or can only return all data in one shot. I didn't find an answer in the documentation. Any insights from the community would be appreciated.

Update: Looks like the answer is no. With a simple yield, no results are returned until all related archives have been iterated. A draft prototype that loads data using fastparquet performs impressively: I'm loading data in a combined manner from both a Postgres table and a set of Parquet archives, based on each archive's archiving time (today's data is only available in the PG database; stable data is available in the Parquet archives). I noticed that the Parquet archives load several times faster than querying the Postgres table (the query is fully index-backed; maybe the crosstab function is slow). It's kind of weird that historical data loads snappily while today's data crawls. Since we only ever extract a dozen columns from each archive, memory usage looks quite manageable right now. Caching becomes an issue when months of data are queried and dozens of archives need to be opened, but no data is returned until all have been extracted.
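For reference, the yield-based set-returning shape described above looks like this minimal sketch (the column names are hypothetical, and it assumes a recent pyarrow is importable by the server's interpreter); consistent with the update, the executor still materializes the full result set before the client sees the first row:

```
CREATE OR REPLACE FUNCTION read_parquet_bars(path text)
RETURNS TABLE (ts timestamp, close float8)
AS $$
    # read the file in record batches instead of loading it whole
    import pyarrow.parquet as pq
    pf = pq.ParquetFile(path)
    for batch in pf.iter_batches(columns=['ts', 'close']):
        for row in batch.to_pylist():
            yield (row['ts'], row['close'])
$$ LANGUAGE plpython3u;
```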
Ben (169 rep)
Apr 8, 2025, 08:14 AM • Last activity: Apr 14, 2025, 08:57 AM
1 votes
0 answers
40 views
Any issues with EDB Postgres, plpython3u, Psycopg on Apple Silicon?
I currently have an old Intel i7-based iMac and plan to upgrade to the M4 Mac Mini (Apple Silicon). I am running Postgres and use PL/Python (plpython3u) internally for procedures and triggers. Externally I use psycopg to access the DB from Python, and an ODBC driver (Actual Technologies) to access the DB from MS Excel. Does anyone know of any issues running the above tech stack on Apple Silicon? I'd hate to spend all that money and end up with a huge pile of trouble. Update: I got the M4 Mac and will update with any issues I find. 1. The EDB PostgreSQL install had issues. Error from EDB Stack Builder: "OS version not supported"
Crashmeister (161 rep)
Nov 2, 2024, 04:47 PM • Last activity: Nov 14, 2024, 09:27 PM
0 votes
0 answers
19 views
PostgreSQL via plpython3u trigger - nrows() returns 0 instead of 1
Given the following code, when executed the notify_date is updated as expected, but nrows() returns 0. Per the documentation:

> nrows(): Returns the number of rows processed by the command. Note that this is not necessarily the same as the number of rows returned. For example, an UPDATE command will set this value but won't return any rows (unless RETURNING is used).

This plan is in the global data:

```
resetTargetNotifyDate = plpy.prepare("""
    UPDATE target_tbl
    SET notify_date = null
    WHERE expression like '%' || $1 || '%'
    """, ['text'])
funcs['resetTargetNotifyDate'] = resetTargetNotifyDate
```

This is the trigger:

```
create or replace function code_change()
returns trigger
language plpython3u
AS $$
    """
    If code_group = 'Swap' then:
    reset notify_date for all targets where expression contains 'Swap:' + code
    """
    if "trade" not in GD:
        plpy.execute("select define_trade_globals()")
    logger = GD['generic']['logger']
    trPlan = GD['trade']['funcs']
    cdRow = TD['new']
    if TD['event'] not in ['UPDATE'] or TD['when'] != 'BEFORE':
        plpy.error("Bad event or timing for trigger.")
    if cdRow['code_group'] == 'Swap':
        currUser = plpy.execute("select current_user")[0]["current_user"]
        rslt = plpy.execute(trPlan['resetTargetNotifyDate'],
                            [f"Swap:{cdRow['code']}"])
        updCnt = rslt.nrows()
        logger.info(f"CdChg_b: Event '{TD['event']}', User '{currUser}', "
                    f"Swap for '{cdRow['code']}' caused {updCnt} notify_date resets")
    return "MODIFY"
$$;
```

Log:

```
2024-09-19 12:49:15.886: INFO : plpy.Stocks : CdChg_b: Event 'UPDATE', User 'dba', Swap for 'NMFC-ARR' caused 0 notify_date resets
```

I cannot figure out why I get 0 when 1 row is updated.
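A stripped-down reproduction along these lines (the table and names are hypothetical) can help isolate whether the prepared plan, the trigger context, or the surrounding plumbing is responsible; in this minimal form nrows() is expected to return 1:

```
CREATE TABLE t (id int, notify_date date);
INSERT INTO t VALUES (1, current_date);

CREATE OR REPLACE FUNCTION repro() RETURNS int AS $$
    plan = plpy.prepare(
        "UPDATE t SET notify_date = NULL WHERE id = $1", ["int"])
    rslt = plpy.execute(plan, [1])
    # nrows() reports rows processed, so an UPDATE of one row should give 1
    return rslt.nrows()
$$ LANGUAGE plpython3u;

SELECT repro();
```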
Crashmeister (161 rep)
Sep 19, 2024, 04:52 PM
0 votes
1 answers
103 views
After upgrade Postgres python is the same version
I upgraded my Postgres from version 13.5 to 16.2 (and RHEL from 7.5 to 8.9). The problem is that Python was not upgraded along with it. I created a function pyver():
```
CREATE OR REPLACE FUNCTION pyver ()
RETURNS TEXT
AS $$
    import sys
    pyversion = sys.version
    return pyversion
$$ LANGUAGE 'plpython3u';
```
It shows the Python version used by Postgres. When I run it on Postgres 13.5, I get this result:
```
# psql -d database
psql (13.5)
Type "help" for help.

postgres@database # select pyver();
                  pyver
-----------------------------------------
 3.6.8 (default, Aug 13 2020, 07:36:02) +
 [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
(1 row)

postgres@database #
```
And when I run it on the upgraded DB, I get this result:
```
# psql -d database
psql (16.2)
Type "help" for help.

database=# select pyver();
                  pyver
-----------------------------------------
 3.6.8 (default, Jan  5 2024, 09:14:44) +
 [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)]
(1 row)

database=#
```
On the new Red Hat, Python 3.9.18 is installed. How do I get Postgres to use a newer version of Python? The upgrade was done this way:
- a new server was installed with RedHat 8.9 (Python 3.9)
- the Postgres binaries (version 13.5) were installed and the postgres user created
- the disk with the Postgres databases (version 13.5) from the old server was then attached
- the DBs were started on the new server
- the next step was the installation of Postgres 16.2
- then the DBs were upgraded to the new Postgres version with this command:
```
time /usr/pgsql-16/bin/pg_upgrade --jobs=16 -d /postgres/pgsql/database/data13 -D /postgres/pgsql/database/data16 -b /usr/pgsql-13/bin/ -B /usr/pgsql-16/bin/ --link
```
- after the upgrade, the DBs run on the new version

And now we have found this problem with Python. The main DB is 70 TB in size. Thanks, Michal
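One way to see which Python a given plpython3.so is actually bound to is to inspect its linkage; a sketch below (the library path assumes PGDG packaging on RHEL). The PGDG builds for RHEL 8 generally link against the platform python3, which is 3.6, so getting a newer Python means installing a plpython build linked against that newer interpreter:

```
ldd /usr/pgsql-16/lib/plpython3.so | grep libpython
# e.g. libpython3.6m.so.1.0 => /lib64/libpython3.6m.so.1.0
```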
Misudka (45 rep)
Jun 20, 2024, 05:50 AM • Last activity: Jun 21, 2024, 11:40 AM
-1 votes
1 answers
44 views
Incremental learning PLPython(3)u
Is there a way to use incremental machine learning with plpython3u in a PostgreSQL database?
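In principle yes, since plpython3u can import anything the server's Python interpreter can. A minimal sketch (the function signature and model choice are illustrative, and it assumes scikit-learn is installed for that interpreter) would cache a model in GD and feed it mini-batches via partial_fit:

```
CREATE OR REPLACE FUNCTION learn_batch(xs float8[][], ys int[])
RETURNS void AS $$
    import numpy as np
    from sklearn.linear_model import SGDClassifier
    if 'model' not in GD:
        GD['model'] = SGDClassifier()
    # partial_fit updates the existing model from one mini-batch;
    # the classes list must cover every label the model will ever see
    GD['model'].partial_fit(np.array(xs), np.array(ys), classes=[0, 1])
$$ LANGUAGE plpython3u;
```

GD only lives for the session, so a real setup would also need to persist the trained model, e.g. pickled into a bytea column.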
AndCh (99 rep)
Jan 18, 2024, 10:27 AM • Last activity: Jan 21, 2024, 06:02 PM
1 votes
1 answers
141 views
How to improve custom aggregate function performance & identify bottlenecks
This is a continuation of a question I asked about the best way to compute statistics on a list of rows unique per column, which can be found here (along with the table schema). I have a table which holds millions of rows of stock data, and I want to compute custom aggregates on these rows. The idea is that the state transition function appends each input value to a state array, and the final function then computes the number from that array. The aggregate is defined as:
```
create or replace aggregate RSI(input float8) (
  SFUNC=tech_float8_accum,
  STYPE=float8[],
  FINALFUNC=RSI_Func
);
```
I have naively implemented the array accumulation function in plpython:
```
-- Append the next input value to the state array
CREATE OR REPLACE FUNCTION tech_float8_accum(agg float8[], input float8)
RETURNS float8[]
AS $$
    return agg + [input] if agg is not None else [input]
$$ LANGUAGE plpython3u;
```
The final function is also written in plpython, and from experience it is quite fast, at least outside of a database context, given that it uses Cython under the hood.
```
CREATE OR REPLACE FUNCTION RSI_Func(input float8[], out val float8)
AS $$
    import talib
    import numpy as np

    cd = np.array(input)
    rsi = talib.RSI(cd)
    return rsi[-1]
$$ LANGUAGE plpython3u;
```
Current usage:
select "security", RSI(ordered.close)
		from (
			select "security", close
			from stocks_data.bars
			where "timeframe" = '1d'
			and "timestamp" >= '2022-11-02'::timestamp
			order by "timestamp" asc
		) as ordered
		group by ordered.security;
This takes approx **3 minutes**, when in reality I need something around 3 seconds or less, like the built-in AVG function offers. Is there something I can do to drastically improve this approach, or should I take another approach altogether? It is too much data to bring in-memory. (The EXPLAIN ANALYZE output of the usage query was attached as an image.)
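One avenue worth measuring, as an untested sketch against the schema above: crossing the SQL/Python boundary once per row is a likely bottleneck, so skip the PL/Python transition function entirely, build the array with the built-in array_agg, and call the Python final function once per group:

```
SELECT "security",
       rsi_func(array_agg(close ORDER BY "timestamp")) AS rsi
FROM stocks_data.bars
WHERE "timeframe" = '1d'
  AND "timestamp" >= '2022-11-02'::timestamp
GROUP BY "security";
```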
Eoin Fitzpatrick (13 rep)
Nov 21, 2023, 06:42 AM • Last activity: Nov 22, 2023, 06:55 AM
0 votes
0 answers
66 views
Calculating aggregates grouped by column WITHOUT aggregate functions
I have a table which contains stock data for various companies. The data goes back as far as 2003, and there are approx 40M rows for each timeframe.
```
CREATE TABLE stocks_data.bars (
    timeframe varchar(3) NOT NULL,
    "timestamp" timestamp NOT NULL,
    "open" float8 NULL,
    high float8 NULL,
    low float8 NULL,
    "close" float8 NULL,
    volume int8 NULL,
    "security" varchar(12) NOT NULL,
    ext bool NOT NULL DEFAULT false,
    realtime bool NOT NULL DEFAULT false,
    CONSTRAINT bars_pkey PRIMARY KEY ("timestamp", security, timeframe)
);
```
The data looks like so (sample rows were attached as an image). I want to perform some technical calculations on these rows, for each ticker symbol ("AAPL", for example). I am using plpython3u to wrap these technical calculation functions. Now say I want to calculate a stochastic RSI on all 40M rows, separately for each ticker (the "security" column). What would be the ideal approach for handling and passing around this data in Postgres? Current naive function:
```
-- function should take in a list of numbers, then calculate the stochRSI and return these values
CREATE OR REPLACE FUNCTION test_01(input double precision[])
  RETURNS setof double precision
AS $$
    import talib

    k, d = talib.STOCHRSI(input)
    return [k, d]
$$ LANGUAGE plpython3u;
```
I have tried: 1. Using the procedural function as a window function, which fails because it is not a window function:
```
select *, test_01(close) over (partition by "security") as tech
from stocks_data.bars bars
group by bars."security";
```
2. Using a lateral join, but this errors because then the function is only passed a single value (stochastic RSI should be calculated on a list of values). There is also no way to separate the calculations by "security" here.
```
SELECT f.*
FROM   stocks_data.bars bars, test_01(bars.close) f
WHERE  bars.timeframe = '1d';
```
Are there any working & *performant* options here?
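One pattern to benchmark (an untested sketch against the schema above): aggregate each security's closes into an ordered array with the built-in array_agg, hand the whole array to the Python function once per security, and expand the set-returning result with LATERAL:

```
SELECT b."security", t.val
FROM (
    SELECT "security", array_agg(close ORDER BY "timestamp") AS closes
    FROM stocks_data.bars
    WHERE timeframe = '1d'
    GROUP BY "security"
) AS b,
LATERAL test_01(b.closes) AS t(val);
```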
Eoin Fitzpatrick (13 rep)
Nov 20, 2023, 08:24 PM • Last activity: Nov 20, 2023, 08:24 PM
0 votes
0 answers
75 views
PostgreSQL: Python UDF not shown in query plan obtained by EXPLAIN ANALYZE
I have a question regarding the execution of a Python UDF. Suppose I have a UDF named testUDF(...) and I apply it to the table "testtable". When I run the query SELECT testUDF(...) from testtable, I get the correct result (I omitted the UDF's parameters on purpose). However, when I inspect the query plan for this query, the UDF is not included. I used EXPLAIN ANALYZE SELECT testUDF(...) from testtable and EXPLAIN SELECT testUDF(...) from testtable; in both cases, the output only shows a sequential scan, and the UDF is not mentioned anywhere. For more complex queries, I would like to see at which position in the query plan the UDF is executed. How is this possible? Many thanks in advance for your help!
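For what it's worth, a scalar function call never gets its own plan node; it is evaluated in the target list of the scan. Two things that surface it (a sketch; testudf and col stand in for the real signature):

```
-- VERBOSE prints each node's Output: list, where the call is visible
EXPLAIN (ANALYZE, VERBOSE) SELECT testudf(col) FROM testtable;

-- cumulative per-function call counts and timings, once tracking is enabled
SET track_functions = 'all';
SELECT testudf(col) FROM testtable;
SELECT funcname, calls, total_time, self_time FROM pg_stat_user_functions;
```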
KSV97 (3 rep)
Nov 17, 2022, 03:07 AM • Last activity: Nov 17, 2022, 05:47 AM
3 votes
1 answers
3462 views
Extension "plpythonu" is not supported by Amazon RDS
I am trying to install the extension "plpython3u", which supports writing Python in PostgreSQL:

```
CREATE EXTENSION plpython3u;
```

Error:

> SQL Error [22023]: ERROR: Extension "plpythonu" is not supported by Amazon RDS. Detail: Installing the extension "plpythonu" failed, because it is not on the list of extensions supported by Amazon RDS. Hint: Amazon RDS allows users with the rds_superuser role to install supported extensions. See: SHOW rds.extensions;

Can you recommend a solution? We are running the following version of PostgreSQL:

> PostgreSQL 12.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-12), 64-bit
SQLSERVERDAWG (141 rep)
Feb 25, 2022, 05:21 AM • Last activity: Oct 17, 2022, 09:04 PM
0 votes
1 answers
190 views
Can you run PL/Python or PL/v8 code from the database itself?
Is it possible to run code that is stored in the database, on the database itself? For example, I'd like a trigger to execute a function whose JavaScript code is stored in the same database. (Normally, functions are defined in advance.) Is this possible, and if so, how can I do it?
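With PL/Python this can be done by exec-ing source fetched from a table; the same shape works in PL/v8 with eval(). The sketch below uses a hypothetical stored_code table, and note that this amounts to arbitrary code execution, so access to that table needs to be locked down:

```
CREATE TABLE stored_code (name text PRIMARY KEY, src text);

CREATE OR REPLACE FUNCTION run_stored(name text) RETURNS void AS $$
    # fetch the source by name and execute it in this function's scope
    rv = plpy.execute(plpy.prepare(
        "SELECT src FROM stored_code WHERE name = $1", ["text"]), [name])
    if rv.nrows() == 1:
        exec(rv[0]["src"])
$$ LANGUAGE plpython3u;
```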
fadedbee (143 rep)
Jan 13, 2022, 09:59 AM • Last activity: Jan 13, 2022, 10:05 AM
0 votes
1 answers
258 views
Passing a python list into plpy execute for the in operator without string interpolation
I have a plpython query for an 'in' where clause:

```
user_ids = [1, 2, 3]
query = "SELECT department FROM users WHERE id IN ($1)"
# not sure what the type should be if id is bigint
prepared_query = plpy.prepare(query, ['bigint'])
plpy.execute(prepared_query, user_ids)
```
The problem is that I am unsure what the argument type should be; I get errors for the different combinations I have tried:
- when using the above syntax, it threw an error because of the commas in the list
- when using bigint[], I got "no operator bigint = bigint[]"
- I have tried passing in a comma-separated string using ",".join()
- I have cast the list to a tuple as well, which didn't work

Has anyone got this to work? It's poorly documented.
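The workaround usually suggested is to pass the whole list as a single array parameter and switch the predicate from IN to = ANY (sketch):

```
user_ids = [1, 2, 3]
plan = plpy.prepare(
    "SELECT department FROM users WHERE id = ANY($1)", ["bigint[]"])
# the Python list becomes ONE bigint[] argument, hence the extra brackets
rv = plpy.execute(plan, [user_ids])
```

This also avoids string interpolation entirely, since the ids travel as a bound parameter.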
Chris Mccabe (101 rep)
Oct 14, 2021, 09:25 AM • Last activity: Oct 14, 2021, 10:22 AM
1 votes
3 answers
976 views
How to get YAML Python Library in PostgreSQL
I would like to use YAML in some plpython code, but YAML is not included in the python3 extension for PostgreSQL. My 'import yaml' gets an error that it cannot find yaml. On my regular Python3 install I did 'pip3 install yaml' which worked fine. How can I get yaml installed into PostgreSQL? Thanks. Some more info for clarification: Here is the start of a function defined in PG: -- Default audit trigger create or replace function sys_audit() returns trigger language plpython3u AS $$ from sys import path path.append('/usr/local/lib/ez-python-library/PostgreSQL/bin'); from datetime import datetime from CommonRowFunctions import getPkValue, getRowValue, getRowChanges keyVal = '' modData = 'unknown' ... The module 'CommonRowFunctions' tries to use YAML to configure logging. This module lives in my python library (external to PostgreSQL). This all works if I use a properties file for the log config, but using a dictionary is the preferred method and YAML makes that very easy.
Crashmeister (161 rep)
Mar 19, 2018, 03:43 PM • Last activity: Nov 12, 2020, 08:04 AM
2 votes
1 answers
593 views
Can plpython3u open files on the file system?
I am trying to write a plpython3u function that should open a file in the file system and read some values out of it that get returned by a query, but I am getting a permission denied error when doing so. I am well aware of SQL injection and the dangers of mixing up the database with the file system; this is just for the sake of testing the boundaries for my own knowledge, not for deployment in a production environment. I tried using chmod 777 on the file in question so that anybody can do anything with it, but I still get permission denied when trying to open the file. This is the script in question:

```
CREATE OR REPLACE FUNCTION roof_type_to_name(roof_type TEXT)
RETURNS TEXT AS $$
    import xml.etree.ElementTree as ET

    ns = {'gml': "http://www.opengis.net/gml",
          'bldg': "http://www.opengis.net/citygml/building/1.0"}
    rooftype_schema = ET.parse(r'/path/to/file.xml')
    definitions = rooftype_schema.findall(".//gml:Definition", ns)
    for definition in definitions:
        for prop in definition:
            if prop.text == roof_type:
                return prop.text
$$ LANGUAGE plpython3u;
```

I also know that there are other ways of reading XML with Postgres; again, I am just using this as a personal learning experience with plpython3u and something I am familiar with. Is it possible at all to open a file on the file system with plpython3u? Or is it totally locked down for safety reasons?
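It is possible; plpython3u code runs as the OS user the server runs as (typically postgres), so "permission denied" often means that user lacks execute (+x) permission on one of the parent directories, not read permission on the file itself. A small probe (sketch) to confirm which identity the code runs under:

```
CREATE OR REPLACE FUNCTION pl_whoami() RETURNS text AS $$
    import getpass, os
    # report the OS identity and working directory of the embedded interpreter
    return "user=%s cwd=%s" % (getpass.getuser(), os.getcwd())
$$ LANGUAGE plpython3u;
-- SELECT pl_whoami();
```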
wfgeo (191 rep)
Jul 1, 2018, 12:15 PM • Last activity: Oct 18, 2020, 12:04 PM
6 votes
2 answers
5004 views
When (or why even) use PLPython(3)u
As I gain more experience with PostgreSQL, I am starting to question the existence of PL/Python. It's considered an "untrusted" language: https://www.postgresql.org/docs/10/plpython.html. What I am wondering is: when or why would anyone need to use it? PL/pgSQL is already quite a strong language that allows you to do a lot of things. Has anyone here had the need to use PL/Python, and if so, for what?
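One small illustration of the gap, as a sketch: anything that leans on Python's standard library or ecosystem is a few lines in PL/Python but painful in PL/pgSQL, for example pulling the hostname out of a URL:

```
CREATE OR REPLACE FUNCTION host_of(url text) RETURNS text AS $$
    from urllib.parse import urlparse
    return urlparse(url).hostname
$$ LANGUAGE plpython3u;
-- SELECT host_of('https://dba.stackexchange.com/');  -- dba.stackexchange.com
```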
Chessbrain (1223 rep)
Jul 21, 2020, 08:36 AM • Last activity: Jul 27, 2020, 02:03 PM
0 votes
1 answers
129 views
Is there an efficient way to use 3rd party packages in PL/Python?
I've found examples of people importing 3rd party packages into PL/Python scripts, but it seems to me that re-importing libraries upon every run of a stored procedure is terribly inefficient. Does Postgres maintain a process-wide python interpreter that's re-used for subsequent PL/Python script executions? If so, is there any way to have it execute import statements in that context BEFORE a PL/Python procedure runs? Alternatively, would it be possible/practical to provide a C-compatible dynamically-linked library that hosts a persistent Python interpreter?
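For what it's worth, each backend keeps its Python interpreter alive for the life of the session, and Python itself caches imported modules in sys.modules, so a repeated import after the first call is a cheap dictionary lookup rather than a reload. Per-function setup can also be cached explicitly in the SD dictionary, as in this sketch:

```
CREATE OR REPLACE FUNCTION tokenize(s text) RETURNS text[] AS $$
    if 'word_re' not in SD:
        import re
        # compiled once per session, reused by every later call
        SD['word_re'] = re.compile(r'\w+')
    return SD['word_re'].findall(s)
$$ LANGUAGE plpython3u;
```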
feuGene (101 rep)
May 16, 2020, 03:43 AM • Last activity: May 16, 2020, 03:46 PM
13 votes
1 answers
3380 views
Why is PL/Python untrusted?
According to the docs:

> PL/Python is only available as an "untrusted" language, meaning it does not offer any way of restricting what users can do in it and is therefore named plpythonu. A trusted variant plpython might become available in the future if a secure execution mechanism is developed in Python.

Why exactly is it difficult to develop a secure execution mechanism for Python but not for other languages such as Perl?
foobar0100 (641 rep)
Mar 16, 2016, 05:41 AM • Last activity: Sep 11, 2019, 05:55 AM
0 votes
1 answers
502 views
how can I divide log files for specific DB queries?
Currently my DB has a couple of triggers on some (not all) tables that get executed when a specific set of columns has been updated. I keep track of the triggers for those rows via a boolean column in the tables. I'd like to be able to log UPDATE and INSERT queries, and the errors/warnings coming from them, to different files on my PostgreSQL server, as a single log file can get quite big. I've thought of a couple of approaches to this problem and am looking for something better:

- have the (plpython3) triggers create separate log files when they get called (there aren't many of them, but they insert millions of rows into different tables)
- write additional triggers that insert logging information into another table (either with explicit columns or a single JSON column)

Thank you for your time and answers!
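For the first approach, Python's logging module can keep one file per table, with the handler cached in GD so setup happens only once per session; a sketch (the log directory is illustrative and must be writable by the postgres OS user):

```
CREATE OR REPLACE FUNCTION log_row_change() RETURNS trigger AS $$
    import logging
    key = 'log_' + TD['table_name']
    if key not in GD:
        lg = logging.getLogger(key)
        h = logging.FileHandler('/var/log/postgresql/%s.log' % TD['table_name'])
        h.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
        lg.addHandler(h)
        lg.setLevel(logging.INFO)
        GD[key] = lg
    # one line per fired trigger; returning None leaves the row unmodified
    GD[key].info('%s on %s', TD['event'], TD['table_name'])
$$ LANGUAGE plpython3u;
```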
Eiron (103 rep)
May 28, 2019, 07:30 PM • Last activity: May 29, 2019, 06:17 AM
2 votes
0 answers
1562 views
installing plpython3u for postgresql 9.6 under windows 7
I've used the EnterpriseDB StackBuilder to install Postgres 9.6 and the language pack. Postgres 9.6 is installed under the C:\Program Files folder, while the EDB language pack is installed under C:\edb. However, it seems that I cannot create the plpython3u extension because it cannot find some necessary modules. How do I get Postgres to find my Python language pack installation? When I type CREATE EXTENSION plpython3u it says:

ERROR: could not load library "C:/Program Files/PostgreSQL/9.6/lib/plpython3.dll": The specified module could not be found.

NOTE: I already use Python 3.6 for development, so I'd rather not change the entire installation; I just want to point Postgres to the proper runtime so plpython3.dll can load up. Do I have to change some environment variables?
user143369
Jan 27, 2018, 02:02 AM • Last activity: Jan 4, 2019, 12:39 PM
1 votes
1 answers
187 views
plpythonu: Read
The application "foo" uses this plpythonu source code to read the custom variable foo.transaction_id. I guess this is way too complicated. How can I shorten/simplify the lines below?

```
txid_list = list(plpy.execute(
    '''SELECT current_setting FROM current_setting('foo.transaction_id')'''))
txid_str = txid_list[0]['current_setting']
txid = int(txid_str)
```
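Two shorter equivalents (a sketch): the result object returned by plpy.execute is directly indexable, so the list() call and the intermediate variables can go, and the cast can even be pushed into SQL:

```
txid = int(plpy.execute(
    "SELECT current_setting('foo.transaction_id') AS v")[0]['v'])

# or let the server do the cast:
txid = plpy.execute(
    "SELECT current_setting('foo.transaction_id')::int AS v")[0]['v']
```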
guettli (1591 rep)
Jan 19, 2018, 09:58 AM • Last activity: Jan 26, 2018, 04:16 PM
1 votes
0 answers
221 views
Require Language In PostgreSQL Extension Control File
I'm developing a custom extension that uses plpythonu in several functions. I'd like to be able to require that plpythonu be installed in order to create my extension, but I don't see anything in the documentation about requiring a language via the control file. I tried adding plpythonu to the 'requires' statement in the control file but it looks for an extension named plpythonu rather than the language. Can this be done?
spencerrecneps (373 rep)
Sep 15, 2015, 01:18 AM • Last activity: Sep 28, 2016, 01:15 PM