Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0 votes • 1 answer • 917 views
Migrating from MariaDB to MySQL - Duplicate Constraint Name
I am attempting to migrate from MariaDB to MySQL by doing a `mysqldump` from MariaDB and then restoring it to MySQL (`mysql:latest` docker container).
I am getting the following error when importing into MySQL:
ERROR 3822 (HY000) at line 172: Duplicate check constraint name 'CONSTRAINT_1'.
If I look at the mysqldump file I can see why this is happening. All boolean columns in my database have constraints that look something like:
```
CONSTRAINT CONSTRAINT_1 CHECK (bool_col_1 in (0,1))
CONSTRAINT CONSTRAINT_2 CHECK (bool_col_2 in (0,1))
CONSTRAINT CONSTRAINT_3 CHECK (bool_col_3 in (0,1))
```
These constraints were not explicitly created by me but implicitly by Flask-SQLAlchemy (I think).
Notice how the constraint names are numbered starting with CONSTRAINT_1. The problem is that each table starts numbering its constraint names from CONSTRAINT_1 again, so the error is thrown when the second table is created. According to the MySQL docs, duplicate constraint names are not allowed; apparently MariaDB allows them.
**Is there a way to rename these constraints systematically or an alternative way to migrate the data?**
The quantity of tables prohibits manually changing the name of each constraint.
Note: This fiddle tests duplicate constraint names and executes without error on MySQL. However, if I run the same commands on a fresh MySQL container, it fails with the duplicate constraint error.
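One possible workaround (a sketch, not tested against this dump; it assumes the generated names appear quoted as `CONSTRAINT_1`, `CONSTRAINT_2`, ... in the dump's `CREATE TABLE` statements) is to rewrite the names so they are globally unique before importing:

```python
# Hypothetical post-processing of the mysqldump output: give every
# auto-generated CONSTRAINT_<n> a globally unique name before the import.
import re

counter = 0

def rename(match):
    global counter
    counter += 1
    return f"CONSTRAINT `ck_auto_{counter}`"

with open("dump.sql") as src, open("dump_fixed.sql", "w") as dst:
    for line in src:
        dst.write(re.sub(r"CONSTRAINT `CONSTRAINT_\d+`", rename, line))
```

Importing `dump_fixed.sql` instead of the original dump should then avoid the name collision.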
noslenkwah
(101 rep)
May 27, 2020, 04:43 PM
• Last activity: Jul 7, 2025, 12:45 PM
0 votes • 2 answers • 182 views
Database design question: Tags with values (duration) linked to them
I am creating a database that takes in daily inputs such as:
- Date
- Events in that date (e.g. project foo: 5 hours, bar: 2 hours, etc...)
- Rating of the date
There's more but these are probably the only relevant inputs regarding the problem.
The problem is how to construct my tables so that the specific duration of each event is stored somewhere. Currently what I have is:
- A users table storing user login credentials (1)
- A rating table to store input date, rating of the date, etc... (2)
- A tags table storing unique tags (3)
I will use the tags and ratings to look for possible patterns in the future in case that's relevant.
I wouldn't find it difficult at all if I just had to deal with tags related to specific dates: I would keep a unique table of tags and associate the tag_id with a date. But I have no clue how to store the duration of a tag on a specific day efficiently. I could, of course, use a single table, accept the redundancy of overlapping tags on different days, and store tags, events and durations in a single string field, but that doesn't seem efficient to me.
I am very novice when it comes to databases and I appreciate the help.
I am using Python 3.7.2, SQLite and Flask-SQLAlchemy 2.4.0 to construct the tables.
Like so:

(1)
```python
class User(db.Model):
    __tablename__ = 'user'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)
    email = db.Column(db.String(120), unique=True, nullable=False)
    password_hash = db.Column(db.String(128))
```

(2)
```python
class Rating(db.Model):
    __tablename__ = 'rating'
    id = db.Column(db.Integer, primary_key=True)
    date = db.Column(db.Date, unique=False, nullable=False)
    rating_day = db.Column(db.Integer, nullable=False)
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False)
    # etc...
```

(3)
```python
class Tags(db.Model):
    __tablename__ = 'tags'
    id = db.Column(db.Integer, primary_key=True)
    tag = db.Column(db.Integer, unique=True, nullable=False)
```
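One common way to model the duration question above (a sketch with illustrative names, using the same `db` object as the models above) is an association table that links a day's rating to a tag and carries the duration as its own column:

```python
# Hypothetical association table: one row per (day, tag) with its duration,
# so the same tag can appear on many days with different durations.
class RatingTag(db.Model):
    __tablename__ = 'rating_tags'
    id = db.Column(db.Integer, primary_key=True)
    rating_id = db.Column(db.Integer, db.ForeignKey('rating.id'), nullable=False)
    tag_id = db.Column(db.Integer, db.ForeignKey('tags.id'), nullable=False)
    duration_minutes = db.Column(db.Integer, nullable=False)
```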
Librarian
(9 rep)
Jul 24, 2019, 04:32 PM
• Last activity: Jun 30, 2025, 08:02 PM
0 votes • 1 answer • 643 views
RAM consumed by a single query in Postgres using SQLAlchemy
I have a list of 36 queries ranging from low complexity (no joins) to high complexity (joins with subqueries). I want to find the RAM consumed by each query.
This is required for performance testing for the application which I am building.
Can anyone please help with the same?
I am creating a SQLAlchemy session object and executing raw queries directly.
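There is no direct per-query RAM counter, but one common approximation (a sketch; the connection URL and sample query are placeholders) is to run each statement under `EXPLAIN (ANALYZE, BUFFERS)` and read the buffer counts and per-node memory figures from the plan:

```python
# A sketch: run a query under EXPLAIN (ANALYZE, BUFFERS) via SQLAlchemy and
# print the plan, which reports buffer usage and per-node memory such as
# "Sort Method: quicksort  Memory: ... kB".
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@localhost/mydb")  # placeholder URL
query = "SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id"  # one of the 36 queries

with engine.connect() as conn:
    for row in conn.execute(text(f"EXPLAIN (ANALYZE, BUFFERS) {query}")):
        print(row[0])
```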
Devesh Krishnani
(1 rep)
Sep 11, 2019, 06:05 PM
• Last activity: May 30, 2025, 12:03 PM
1 vote • 0 answers • 53 views
Aurora PostgreSQL Severe Performance Degradation Under Concurrent Load
**Environment:**
- Database: AWS Aurora PostgreSQL
- ORM: SQLAlchemy
- API Framework: Python FastAPI
**Issue:**
I'm experiencing significant query performance degradation when my API receives concurrent requests. I ran a performance test comparing single execution vs. concurrent execution of the same query, and the results are concerning.
**Real-World Observations:**
When monitoring our production API endpoint during load tests with 100 concurrent users, I've observed concerning behavior:
- When running the same complex query through pgAdmin without concurrent load, it consistently completes in ~60ms.
- During periods of high concurrency (100 simultaneous users), response times for this same query become wildly inconsistent:
  - Some executions still complete in 60-100ms
  - Others suddenly take up to 2 seconds
  - No clear pattern to which queries are slow
**Test Results:**
Single query execution time: 0.3098 seconds
Simulating 100 concurrent clients - all requests starting simultaneously...
Results Summary:
Total execution time: 32.7863 seconds
Successful queries: 100 out of 100
Failed queries: 0
Average query time: 0.5591 seconds (559ms)
Min time: 0.2756s, Max time: 1.9853s
Queries exceeding 500ms threshold: 21 (21.0%)
50th percentile (median): 0.3114s (311ms)
95th percentile: 1.7712s (1771ms)
99th percentile: 1.9853s (1985ms)
With 100 concurrent threads:
- Each query takes ~12.4x longer on average (3.62s vs 0.29s)
- Huge variance between fastest (0.5s) and slowest (4.8s) query
- Overall throughput is ~17.2 queries/second (better than sequential, but still concerning)
**Query Details:**
The query is moderately complex, involving:
several JOINs across multiple tables, a subquery using EXISTS, and ORDER BY and LIMIT clauses.
**My Setup**
**SQLAlchemy Configuration:**
```python
engine = create_async_engine(
    settings.ASYNC_DATABASE_URL,
    echo=settings.SQL_DEBUG,
    pool_pre_ping=True,
    pool_use_lifo=True,
    pool_size=20,
    max_overflow=100,
    pool_timeout=30,
    pool_recycle=30,
)

AsyncSessionLocal = async_sessionmaker(
    bind=engine,
    class_=AsyncSession,
    expire_on_commit=False,
    autocommit=False,
    autoflush=False,
)
```
**FastAPI Dependency:**
```python
async def get_db() -> AsyncGenerator[AsyncSession, None]:
    """Get database session"""
    async with AsyncSessionLocal() as session:
        try:
            yield session
            await session.commit()
        except Exception:
            await session.rollback()
            raise
```
**Questions:**
- **Connection Pool Settings:** Are my SQLAlchemy pool settings appropriate for handling 100 concurrent requests? What would be optimal?
- **Aurora Configuration:** What Aurora PostgreSQL parameters should I tune to improve concurrent query performance?
- **Query Optimization:** Is there a standard approach to optimize complex queries with JOINs and EXISTS subqueries for better concurrency?
- **ORM vs Raw SQL:** Would bypassing SQLAlchemy ORM help performance?
Any guidance or best practices would be greatly appreciated. I'd be happy to provide additional details if needed.
**Update:**
**Hardware Configuration**
1. Aurora regional cluster with 1 instance
2. Capacity Type: Provisioned (Min: 0.5 ACUs (1GiB), Max: 16 ACUs (32 GiB))
3. Storage Config: Standard
**Performance Insights**
1. Max ACU utilization: 70%
2. Max CPU Utilization: 45%
3. Max DB connection: 111
4. EBS IO Balance: 100%
5. Buffer Cache Hit Ratio: 100%
Abhishek Tyagi
(11 rep)
May 20, 2025, 07:18 PM
• Last activity: May 21, 2025, 02:50 PM
-1 votes • 1 answer • 385 views
Using Indexing When Performing JOINs on Part of a Composite Key
I have a table in my database (hosted on MariaDB) that looks like this (I have given the definition in SQLAlchemy):

```python
class Address(Base):
    __tablename__ = 'address'
    __table_args__ = {'schema': DB_NAME}
    referenceID = sqlalchemy.Column(sqlalchemy.String(length=25), primary_key=True)
    referenceTable = sqlalchemy.Column(sqlalchemy.String(length=25), primary_key=True)
    sourceReferenceID = sqlalchemy.Column(sqlalchemy.String(length=25), primary_key=True)
    addressType = sqlalchemy.Column(sqlalchemy.String(length=25), primary_key=True)
    zip = sqlalchemy.Column(sqlalchemy.String(length=10))
    streetAddress = sqlalchemy.Column(sqlalchemy.String(length=50))
    city = sqlalchemy.Column(sqlalchemy.String(length=25))
    fullAddress = sqlalchemy.Column(sqlalchemy.String(length=200))
    country = sqlalchemy.Column(sqlalchemy.String(length=25))
    validFrom = sqlalchemy.Column(sqlalchemy.Date)
    validTo = sqlalchemy.Column(sqlalchemy.Date)
```
Now, it can be seen that my table has a composite key which includes several attributes.
As far as I know, a clustered index will be created out of the primary key of a table. I am not completely aware of how this works when the table consists of a composite key. An explanation on this would be appreciated.
Further, I am going to want to perform several JOINs on this table, particularly using the `referenceID` and `sourceReferenceID` attributes of the table. To do this efficiently, do I need to create (non-clustered) indexes on these attributes separately using `index=True`?
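As a rough sketch (assuming InnoDB and SQLAlchemy 1.4+; the index name is illustrative): the clustered index is built on the full composite primary key in declaration order, so joins driven by `referenceID` can already use its leftmost prefix, while joins driven by `sourceReferenceID` alone cannot and would need a separate secondary index, for example:

```python
# A sketch: an explicit secondary index for joins driven by sourceReferenceID;
# referenceID is the leftmost column of the clustered primary key, so it may
# not need one of its own.
import sqlalchemy
from sqlalchemy import Index
from sqlalchemy.orm import declarative_base

Base = declarative_base()
DB_NAME = 'mydb'  # placeholder

class Address(Base):
    __tablename__ = 'address'
    __table_args__ = (
        Index('ix_address_srcref_ref', 'sourceReferenceID', 'referenceID'),
        {'schema': DB_NAME},
    )
    referenceID = sqlalchemy.Column(sqlalchemy.String(length=25), primary_key=True)
    referenceTable = sqlalchemy.Column(sqlalchemy.String(length=25), primary_key=True)
    sourceReferenceID = sqlalchemy.Column(sqlalchemy.String(length=25), primary_key=True)
    addressType = sqlalchemy.Column(sqlalchemy.String(length=25), primary_key=True)
```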
Minura Punchihewa
(99 rep)
Mar 10, 2022, 08:16 AM
• Last activity: Apr 28, 2025, 05:09 PM
0 votes • 0 answers • 264 views
I keep getting asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation in my SQLAlchemy application
I have SQLAlchemy set up with pgbouncer; my pgbouncer is configured to use session mode and my SQLAlchemy engine config is as follows:
```python
engine = create_async_engine(
    url=db_url,
    echo=False,
    pool_size=200,
    max_overflow=20,
    pool_timeout=30,
    pool_pre_ping=True,
    pool_recycle=1800,
    pool_reset_on_return=None,
    poolclass=AsyncAdaptedQueuePool,
    connect_args={"prepared_statement_cache_size": 0, 'server_settings': {'jit': 'off'}}
)

Session = sessionmaker(
    bind=engine,
    class_=AsyncSession,
    expire_on_commit=False,
    autoflush=False,
    autocommit=False,
)
```
My session usage is:
```python
@asynccontextmanager
async def get_session_context() -> AsyncSession:  # type: ignore
    async with Session() as session:
        if session is None:
            raise Exception("Database session is None")
        try:
            yield session
        except Exception as e:
            LOGGER.error(pprint.pprint(e, indent=4, depth=4))
            await session.rollback()
            raise e
        finally:
            await session.close()
```
but I keep getting this error: `asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation`.
It seems SQLAlchemy closes the database connection while it is still in use. The reason I do not think the connection is getting closed from the pgbouncer side is that I tried using SQLAlchemy against the database directly and I get the same error; the connection seems to be getting closed somehow.
Starbody
(3 rep)
Feb 11, 2025, 08:50 PM
0 votes • 2 answers • 1569 views
SQLAlchemy: table version already exists. pgAdmin4 fails on Linux Mint 20 and 20.1
*I already found the solution, I just want to share it in case someone has the same problem.*
### Situation ###
Install pgAdmin4 version 4.30 on Linux Mint 20 (Ulyana) and Linux Mint 20.1 (Ulyssa) inside virtualenv. The bulk of pgAdmin4 is a Python web application written using the Flask framework. Therefore, it is an ideal candidate to be implemented in a Python virtualenv as indicated in:
[How to Install PostgreSQL with pgAdmin4 on Linux Mint 20](https://www.tecmint.com/install-postgresql-with-pgadmin4-on-linux-mint/)
And,
[Installing pgAdmin 4 on Linux Mint 20 (ulyana)](https://medium.com/@ogunyemijeremiah/installing-pgadmin-4-on-linux-mint-20-ulyana-741b941479c9)
And in Spanish,
[Cómo instalar PostgreSQL con pgAdmin4 en Linux Mint 20](https://es.linux-console.net/?p=2457)
### Error ###
After finishing the process, when trying to run pgAdmin4 for the **first time**, the following error appears:
Traceback (most recent call last):
File "venv/lib/python3.8/site-packages/pgadmin4/pgAdmin4.py", line 87, in
exec(open(file_quote(setup_py), 'r').read())
File "", line 449, in
File "", line 372, in setup_db
[...]
File "/home/mint/pgAdmin4/venv/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py", line 575, in get_options
self._sa.apply_driver_hacks(self._app, sa_url, options)
File "/home/mint/pgAdmin4/venv/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py", line 908, in apply_driver_hacks
sa_url.database = os.path.join(app.root_path, sa_url.database)
AttributeError: can't set attribute
In the **second attempt** to execute pgAdmin4 the error shown is:
Traceback (most recent call last):
File "/home/mint/pgAdmin4/venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/mint/pgAdmin4/venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 681, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: table version already exists
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "venv/lib/python3.8/site-packages/pgadmin4/pgAdmin4.py", line 94, in
app = create_app()
[...]
File "/home/mint/pgAdmin4/venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 681, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table version already exists
[SQL:
CREATE TABLE version (
name VARCHAR(32) NOT NULL,
value INTEGER NOT NULL,
PRIMARY KEY (name)
)
]
(Background on this error at: http://sqlalche.me/e/14/e3q8)
This situation started to occur in March 2021 and as far as I know, it is identical on both Linux Mint 20 and Linux Mint 20.1.
> Note: a more suitable location to put the virtualenv would have been `/opt`, but for multiple tests I opted for `/home` from a live USB with Linux Mint 20, and then applied the final changes to my Linux Mint 20.1.
EspiFreelancer
(1 rep)
Mar 19, 2021, 06:54 PM
• Last activity: Feb 6, 2025, 01:01 PM
1 vote • 2 answers • 1759 views
How to cache queries in SQLAlchemy just like I do in MySQL by using the SQL_CACHE keyword?
I want to cache SQLAlchemy queries. I have discovered that the `SQL_CACHE` keyword in MySQL can be helpful for caching queries on demand.
But how do I do it in SQLAlchemy?
Is it possible?
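A sketch of one way to emit the hint (assuming SQLAlchemy 1.4+; the connection URL and table are placeholders; note that the query cache, and with it `SQL_CACHE`, was removed in MySQL 8.0, so this only helps on older MySQL versions or servers that still support it):

```python
# A sketch: Select.prefix_with() injects a keyword right after SELECT,
# so the emitted SQL starts with "SELECT SQL_CACHE ...".
from sqlalchemy import MetaData, Table, create_engine, select

engine = create_engine("mysql+pymysql://user:pass@localhost/mydb")  # placeholder URL
metadata = MetaData()
users = Table("users", metadata, autoload_with=engine)  # placeholder table

stmt = select(users).prefix_with("SQL_CACHE")
with engine.connect() as conn:
    rows = conn.execute(stmt).fetchall()
```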
Jitin Maherchandani
(145 rep)
Feb 12, 2016, 10:57 AM
• Last activity: Sep 19, 2024, 06:54 PM
0 votes • 0 answers • 31 views
Should one stick to 1NF (and higher NF) rules for SQL even while using ORMs?
I am currently making a database in MySQL using the SQLAlchemy ORM. I followed the principles of 1NF until this point (atomised data, no redundancy) but now I am tempted to add a few useful (though technically redundant) columns.
1NF would prescribe using a foreign key column in the Children table which points to an entry in the Parent table. However, for the sake of my front-end tasks, I was also thinking of adding a "children" column in my Parent table that holds the primary keys of all the children of that entry. (Note that each parent has multiple children, so this "children" column in the Parent table essentially holds a list. It would help me because I could simply use parentXYZ.children instead of having to query the Children table to get a list of all children of parentXYZ.) The "relationship" feature (paired with "back_populates") in SQLAlchemy even seems to suggest this is optimal.
Will this have any major issues down the line if I can be sure I will be using the same ORM forever?
I have just begun reading up on SQL so apologies if this question is trivial.
Resources I read through already:
1. https://docs.sqlalchemy.org/en/20/orm/basic_relationships.html
2. https://en.wikipedia.org/wiki/First_normal_form
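For reference, a minimal sketch (illustrative names) of the `relationship()`/`back_populates` pattern mentioned above: `parent.children` is derived from the child's foreign key at query time, so no redundant list-of-children column needs to be stored on the parent row.

```python
# A sketch: parent.children is computed from Child.parent_id by the ORM,
# keeping the schema normalized (no stored list of child ids).
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    children = relationship("Child", back_populates="parent")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"), nullable=False)
    parent = relationship("Parent", back_populates="children")
```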
Arya Vishe
(1 rep)
Jun 28, 2024, 11:01 AM
• Last activity: Jun 28, 2024, 12:37 PM
0 votes • 1 answer • 465 views
PostgreSQL driven by SQLAlchemy - Release Savepoint Idle in Transaction
While running PostgreSQL 13.12 (occurs in several versions of PG11/PG13) using SQLAlchemy 1.3, we are occasionally hitting issues where increased concurrency leaves certain transactions (and their nested transactions) in the "Idle in Transaction" state with the `RELEASE SAVEPOINT ...` query.
Looking at the currently running queries, it is not clear why transactions have stopped moving forward. I have also observed this behavior without any hanging locks:
| pid | datname | duration | query | state | application_name | wait_event_type |
| --- | ------- | -------- | ----- | ----- | ---------------- | --------------- |
| 27662 | app | 22:22:37.429569 | select pg_advisory_xact_lock(resource_id) from resource where uuid = '018afcac-a5ab-7a5c-9eb4-2d8b0be4b556' | active | http.api.1 | Lock |
| 25830 | app | 22:22:29.236398 | RELEASE SAVEPOINT sa_savepoint_5 | idle in transaction | http.api.0 | Client |
| 21490 | app | 22:22:29.015862 | select pg_advisory_xact_lock(resource_id) from resource where uuid = '018afcac-a5ab-7a5c-9eb4-2d8b0be4b556' | active | http.api.0 | Lock |
| 27674 | app | 22:22:27.780581 | RELEASE SAVEPOINT sa_savepoint_3 | idle in transaction | http.api.2 | Client |
| 29120 | app | 22:22:26.053851 | select pg_advisory_xact_lock(resource_id) from resource where uuid = '018afcac-a5ab-7a5c-9eb4-2d8b0be4b556' | active | http.api.2 | Lock |
Any way to debug this? This API call generally works, with the session being cleaned up, and it does not fail consistently. We are using SQLAlchemy's connection pooling to manage the connections - closed sessions will return the connection to the pool and issue a rollback to clean up the connection (when commit should have committed all other statements).
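As a starting point for debugging (a sketch; the connection URL is a placeholder), listing the idle-in-transaction sessions together with whatever `pg_blocking_pids()` reports can help separate lock waits from sessions where the server is simply waiting on the client:

```python
# A sketch: inspect stuck sessions via pg_stat_activity; pg_blocking_pids()
# (PostgreSQL 9.6+) shows lock-level blockers, while wait_event_type='Client'
# means the server is waiting for the application to do something.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@host/app")  # placeholder URL

sql = """
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       now() - xact_start AS xact_age,
       pg_blocking_pids(pid) AS blocked_by,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE state LIKE 'idle in transaction%'
   OR cardinality(pg_blocking_pids(pid)) > 0
ORDER BY xact_age DESC NULLS LAST
"""

with engine.connect() as conn:
    for row in conn.execute(text(sql)):
        print(row)
```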
Justin Lowen
(68 rep)
Oct 8, 2023, 05:16 PM
• Last activity: Oct 9, 2023, 09:40 AM
1 vote • 0 answers • 2588 views
SQL connection error in python webapp: "pandas.io.sql.DatabaseError: Execution failed on sql"
I have built a Webapp with python flask, that reads and writes to SQL tables.
Typically in the morning, I receive an error regarding SQL query execution that looks like this (see logs snippet below). I'm quite new to web development and everything involved, so this is pretty alien to me.
From what I can understand based on google searches, it's an issue with my SQL connection timing out, and completely stopping. It works again if I restart my app on Azure but I'm hoping someone can help me sort out the root of the issue.
If there are any other things you may need to see to help me out with this issue (such as the structure of my SQL connection script, or more lines from the logs) let me know and I'll add them to the post. Thanks all!
[2021-06-24 08:30:20,133] ERROR in app: Exception on /FUNCTION/ [GET]
2021-06-24T08:30:20.134169815Z Traceback (most recent call last):
2021-06-24T08:30:20.134180016Z File "/tmp/8d9367259a977d960/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 1725, in execute
2021-06-24T08:30:20.134188617Z cur.execute(*args, **kwargs)
2021-06-24T08:30:20.134196217Z pyodbc.Error: ('08S02', '[08S02] [Microsoft][ODBC Driver 17 for SQL Server]SMux Provider: Physical connection is not usable [xFFFFFFFF]. (-1) (SQLExecDirectW)')
2021-06-24T08:30:20.134215319Z
2021-06-24T08:30:20.134222119Z The above exception was the direct cause of the following exception:
2021-06-24T08:30:20.134229220Z
2021-06-24T08:30:20.134236120Z Traceback (most recent call last):
2021-06-24T08:30:20.134243021Z File "/tmp/8d9367259a977d960/antenv/lib/python3.7/site-packages/flask/app.py", line 2070, in wsgi_app
2021-06-24T08:30:20.134250221Z response = self.full_dispatch_request()
2021-06-24T08:30:20.134257122Z File "/tmp/8d9367259a977d960/antenv/lib/python3.7/site-packages/flask/app.py", line 1515, in full_dispatch_request
2021-06-24T08:30:20.134264322Z rv = self.handle_user_exception(e)
2021-06-24T08:30:20.134271123Z File "/tmp/8d9367259a977d960/antenv/lib/python3.7/site-packages/flask/app.py", line 1513, in full_dispatch_request
2021-06-24T08:30:20.134278424Z rv = self.dispatch_request()
2021-06-24T08:30:20.134285124Z File "/tmp/8d9367259a977d960/antenv/lib/python3.7/site-packages/flask/app.py", line 1499, in dispatch_request
2021-06-24T08:30:20.134292325Z return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
2021-06-24T08:30:20.134299325Z File "/tmp/8d9367259a977d960/app/routes.py", line 1156, in FUNCTION
2021-06-24T08:30:20.134307226Z RESULTS = FUNCTION()
2021-06-24T08:30:20.134314026Z File "/tmp/8d9367259a977d960/models.py", line 403, in FUNCTION
2021-06-24T08:30:20.134320927Z DF = pd.read_sql_query(Query, sql_conn)
2021-06-24T08:30:20.134327427Z File "/tmp/8d9367259a977d960/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 394, in read_sql_query
2021-06-24T08:30:20.134334828Z chunksize=chunksize,
2021-06-24T08:30:20.134341228Z File "/tmp/8d9367259a977d960/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 1771, in read_query
2021-06-24T08:30:20.134348429Z cursor = self.execute(*args)
2021-06-24T08:30:20.134368130Z File "/tmp/8d9367259a977d960/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 1737, in execute
2021-06-24T08:30:20.134375531Z raise ex from exc
2021-06-24T08:30:20.134383232Z pandas.io.sql.DatabaseError: Execution failed on sql 'select * from table': ('08S02', '[08S02] [Microsoft][ODBC Driver 17 for SQL Server]SMux Provider: Physical connection is not usable [xFFFFFFFF]. (-1) (SQLExecDirectW)')
2021-06-24T08:30:20.144119574Z 172.16.2.1 - - [24/Jun/2021:08:30:20 +0000] "GET /FUNCTION/ HTTP/1.1" 500 290 "https://webapp.co.uk/page " "Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.1 Mobile/15E148 Safari/604.1"
2021-06-24T08:30:23.114523917Z [2021-06-24 08:30:23,113] ERROR in app: Exception on /FUNCTION/ID [GET]
2021-06-24T08:30:23.114576221Z Traceback (most recent call last):
2021-06-24T08:30:23.114583522Z File "/tmp/8d9367259a977d960/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 1725, in execute
2021-06-24T08:30:23.114588322Z cur.execute(*args, **kwargs)
2021-06-24T08:30:23.114592723Z pyodbc.Error: ('08S02', '[08S02] [Microsoft][ODBC Driver 17 for SQL Server]SMux Provider: Physical connection is not usable [xFFFFFFFF]. (-1) (SQLExecDirectW)')
I just checked a bit further back in the logs and I seem to get this similarly themed error as well:
2021-06-23T07:41:28.233313212Z [2021-06-23 07:41:28,228] ERROR in app: Exception on /function/ [GET]
2021-06-23T07:41:28.233318713Z Traceback (most recent call last):
2021-06-23T07:41:28.233323813Z File "/tmp/8d935b633760569e3/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 1725, in execute
2021-06-23T07:41:28.233329613Z cur.execute(*args, **kwargs)
2021-06-23T07:41:28.233334814Z pyodbc.OperationalError: ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x68 (104) (SQLExecDirectW)')
2021-06-23T07:41:28.233340214Z
2021-06-23T07:41:28.233345215Z During handling of the above exception, another exception occurred:
2021-06-23T07:41:28.233350115Z
2021-06-23T07:41:28.233374117Z Traceback (most recent call last):
2021-06-23T07:41:28.233379417Z File "/tmp/8d935b633760569e3/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 1729, in execute
2021-06-23T07:41:28.233384918Z self.con.rollback()
2021-06-23T07:41:28.233390018Z pyodbc.OperationalError: ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]Communication link failure (-2147467259) (SQLEndTran)')
2021-06-23T07:41:28.233395418Z
2021-06-23T07:41:28.233400319Z The above exception was the direct cause of the following exception:
2021-06-23T07:41:28.233405119Z
2021-06-23T07:41:28.233409720Z Traceback (most recent call last):
2021-06-23T07:41:28.233414720Z File "/tmp/8d935b633760569e3/antenv/lib/python3.7/site-packages/flask/app.py", line 2070, in wsgi_app
2021-06-23T07:41:28.233420120Z response = self.full_dispatch_request()
2021-06-23T07:41:28.233425121Z File "/tmp/8d935b633760569e3/antenv/lib/python3.7/site-packages/flask/app.py", line 1515, in full_dispatch_request
2021-06-23T07:41:28.233430721Z rv = self.handle_user_exception(e)
2021-06-23T07:41:28.233435722Z File "/tmp/8d935b633760569e3/antenv/lib/python3.7/site-packages/flask/app.py", line 1513, in full_dispatch_request
2021-06-23T07:41:28.233441122Z rv = self.dispatch_request()
2021-06-23T07:41:28.233446022Z File "/tmp/8d935b633760569e3/antenv/lib/python3.7/site-packages/flask/app.py", line 1499, in dispatch_request
2021-06-23T07:41:28.233451123Z return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
2021-06-23T07:41:28.233455823Z File "/tmp/8d935b633760569e3/app/routes.py", line 1139, in function
2021-06-23T07:41:28.233460623Z results= function()
2021-06-23T07:41:28.233466124Z File "/tmp/8d935b633760569e3/models.py", line 403, in function
2021-06-23T07:41:28.233471124Z df = pd.read_sql_query(query, sql_conn)
2021-06-23T07:41:28.233475825Z File "/tmp/8d935b633760569e3/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 394, in read_sql_query
2021-06-23T07:41:28.233480925Z chunksize=chunksize,
2021-06-23T07:41:28.233485925Z File "/tmp/8d935b633760569e3/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 1771, in read_query
2021-06-23T07:41:28.233490926Z cursor = self.execute(*args)
2021-06-23T07:41:28.233495426Z File "/tmp/8d935b633760569e3/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 1734, in execute
2021-06-23T07:41:28.233500327Z raise ex from inner_exc
2021-06-23T07:41:28.233504927Z pandas.io.sql.DatabaseError: Execution failed on sql: select * from [table]
2021-06-23T07:41:28.233509927Z ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x68 (104) (SQLExecDirectW)')
2021-06-23T07:41:28.233514928Z unable to rollback
2021-06-23T07:41:28.236974194Z 172.16.2.1 - - [23/Jun/2021:07:41:28 +0000] "GET /function/ HTTP/1.1" 500 290 "https://webapp.co.uk/page " "Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.1 Mobile/15E148 Safari/604.1"
2021-06-23T07:41:30.460419463Z [2021-06-23 07:41:30,459] ERROR in app: Exception on /form-choice/id[GET]
2021-06-23T07:41:30.460444664Z Traceback (most recent call last):
2021-06-23T07:41:30.460450165Z File "/tmp/8d935b633760569e3/antenv/lib/python3.7/site-packages/pandas/io/sql.py", line 1725, in execute
2021-06-23T07:41:30.460460166Z cur.execute(*args, **kwargs)
2021-06-23T07:41:30.460464066Z pyodbc.Error: ('08S02', '[08S02] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x20 (32) (SQLExecDirectW)')
I've already asked this question over on Stack Overflow but no one has replied, so I'm hoping this site will be more useful, as it's more of a DB-related question.
Liam
(11 rep)
Jun 25, 2021, 09:18 AM
• Last activity: Sep 12, 2023, 08:49 AM
0 votes • 1 answer • 190 views
Minified vs. pretty XML
I have a text field in a PostgreSQL Database in which XML data is stored.
Now I was wondering if it makes a difference to store the XML in minified or in pretty version.
According to http://bytesizematters.com the size of the text itself does differ:

[Size comparison of minified vs pretty XML]
Now I'm wondering if that will reflect on database size and performance. Maybe PostgreSQL is already optimizing text fields so much, that it actually doesn't matter.
Unexpectedly, I didn't find anything on that topic on the internet. So if you have some links on that, I'd be very happy to read and learn more about DB optimization.
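One way to answer this empirically (a sketch using a throwaway temp table; the connection URL and sample documents are placeholders) is to store the same document both ways and compare `pg_column_size()`, which reports the stored size including any TOAST compression for larger values:

```python
# A sketch: store the same XML minified and pretty-printed, then compare the
# stored datum sizes with pg_column_size().
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@localhost/mydb")  # placeholder URL

minified = "<root><item>1</item><item>2</item></root>"
pretty = "<root>\n  <item>1</item>\n  <item>2</item>\n</root>\n"

with engine.begin() as conn:
    conn.execute(text("CREATE TEMP TABLE xml_size_test (variant text, body text)"))
    conn.execute(
        text("INSERT INTO xml_size_test VALUES ('minified', :m), ('pretty', :p)"),
        {"m": minified, "p": pretty},
    )
    for variant, size in conn.execute(
        text("SELECT variant, pg_column_size(body) FROM xml_size_test")
    ):
        print(variant, size)
```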

bechtold
(103 rep)
Aug 8, 2023, 11:02 AM
• Last activity: Aug 8, 2023, 12:19 PM
0 votes • 1 answer • 67 views
Checking for already present data before committing?
I'm following a simple YouTube course on how to build a FastAPI app with a Postgres DB.
The Users table has the following schema (using SQLAlchemy):
```python
class User(Base):
    """
    Class responsible for the `users` table in the PostgreSQL DB
    """
    __tablename__ = "users"
    id = Column(Integer, primary_key=True, index=True, nullable=False)
    email = Column(String, unique=True, nullable=False)
    password = Column(String, nullable=False)
    created_at = Column(TIMESTAMP(timezone=True), nullable=False, server_default=text("now()"))
```
I've noticed that when I try to create a user (a row in the table) with an already used mail, I get an error, which is expected. Then, if I change the mail to a different one and commit the change again, all is fine... However, when I check all the rows in the table, I see that the id column has skipped some values, corresponding to those tries for which I got `sqlalchemy.exc.IntegrityError`.
Given this situation, I wonder whether it's better to first check for an already existing mail, and only then try to commit... otherwise, the id column will have jumps in its values.
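A sketch of that pre-check (the `db` session argument and helper name are hypothetical): note that it does not remove the race entirely, since two requests can still collide between the check and the commit, so the `IntegrityError` handling stays, and the gaps in `id` are harmless either way.

```python
# A sketch: look up the e-mail first, then insert; keep IntegrityError
# handling anyway, because two concurrent requests can still race.
from sqlalchemy import select
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session

def create_user(db: Session, email: str, hashed_password: str) -> "User":
    existing = db.execute(select(User).where(User.email == email)).scalar_one_or_none()
    if existing is not None:
        raise ValueError("email already registered")
    user = User(email=email, password=hashed_password)
    db.add(user)
    try:
        db.commit()
    except IntegrityError:
        db.rollback()
        raise ValueError("email already registered")
    return user
```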
An old man in the sea.
(117 rep)
May 29, 2023, 08:38 AM
• Last activity: May 30, 2023, 12:18 PM
0 votes • 0 answers • 463 views
Sudden momentary dips in AWS RDS free storage space available
I have been experiencing some sudden momentary dips in the available AWS RDS free storage space. Once the free space dips, the instance goes into storage optimization mode.
I have a Postgres table which is around 100GB with around 280 million records to which constant updates happen. The table has a TOAST of around 300GB. I'm not able to tell where to even point the issue at while looking at CloudWatch. These dips are making the product a lot slower.
I have looked at enhanced monitoring and all the different charts and can't see any unusual spikes with either OS or DB processes in CloudWatch during the times these dips happened.
Below is an attached picture. Please let me know if you guys have any ideas why this might be happening?

dev_998
(1 rep)
Apr 4, 2023, 05:53 PM
2 votes • 2 answers • 492 views
Operational Error on MySQL AWS RDS Instance
The heading pretty much describes it all, here are some more information on the Amazon RDS I am using.
- Engine: MySQL Community 8.0.28
- Class db.r6g.large
- Multi-AZ no. The data volume is very small, not even a few megs, as I am just doing some PoC to test the connectivity.
- Latency from MySQL commandline: 10ms
- Availability zone: ap-southeast-1
- Set up the inbound traffic rule for port 3306 of the relevant security group.
These are the client details
- Ubuntu 22.04
- Python 3.10
- SQL Alchemy 1.4.44
- mysql Ver 8.0.32-0ubuntu0.22.04.2 for Linux on x86_64 ((Ubuntu))
- Geographic location: Singapore
Here is a sample code snippet (hopefully, self-explanatory)
```python
#!/usr/bin/env python3
# encoding: utf-8
import logging

from sqlalchemy import create_engine

# `results` is a pandas DataFrame built earlier in the script
uri: str = 'mysql+mysqlconnector://della:random_password@tensorflow-dump.apj-xjge.ap-southeast-1.rds.amazonaws.com:3306/convnet'

with create_engine(url=uri).connect() as out_client:  # This line throws the error
    logging.info(msg=f'Connected to {uri}.')
    results.to_sql(name='machine_name', con=out_client,
                   if_exists='append', index=False)
    logging.info(msg=f'Pushed data.')
```

So, I am encountering the operational error on a sporadic basis, once every 3-4 attempts of running the above code. It is not reproducible, but it makes the database unsuitable for production usage. When I do get the error, the flow waits for about 12 minutes at the `connect()` call before throwing the error.
Here is the stacktrace.
Traceback (most recent call last):
File "/home/della/supply-chain-dev/clustering_results_postprocess/src/push.py", line 119, in
asyncio.run(main=main())
File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/home/della/supply-chain-dev/clustering_results_postprocess/src/push.py", line 111, in main
with create_engine(url=out_dbase_uri).connect() as out_client:
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3315, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 96, in __init__
else engine.raw_connection()
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect
Connection._handle_dbapi_exception_noconnection(
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2198, in _handle_dbapi_exception_noconnection
util.raise_(
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 210, in raise_
raise exception
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect
return fn()
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 325, in connect
return _ConnectionFairy._checkout(self)
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 491, in checkout
rec = pool._do_get()
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 145, in _do_get
with util.safe_reraise():
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 210, in raise_
raise exception
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 143, in _do_get
return self._create_connection()
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 271, in _create_connection
return _ConnectionRecord(self)
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 386, in __init__
self.__connect()
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 684, in __connect
with util.safe_reraise():
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 210, in raise_
raise exception
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 680, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 578, in connect
return dialect.connect(*cargs, **cparams)
File "/home/della/.local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 598, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/della/.local/lib/python3.10/site-packages/mysql/connector/pooling.py", line 293, in connect
return CMySQLConnection(*args, **kwargs)
File "/home/della/.local/lib/python3.10/site-packages/mysql/connector/connection_cext.py", line 118, in __init__
self.connect(**kwargs)
File "/home/della/.local/lib/python3.10/site-packages/mysql/connector/abstracts.py", line 1178, in connect
self._open_connection()
File "/home/della/.local/lib/python3.10/site-packages/mysql/connector/connection_cext.py", line 293, in _open_connection
raise get_mysql_exception(
sqlalchemy.exc.OperationalError: (mysql.connector.errors.OperationalError) 2013 (HY000): Lost connection to MySQL server at 'reading authorization packet', system error: 104
(Background on this error at: https://sqlalche.me/e/14/e3q8)
So I am not sure if it is a
- resource issue (need a machine with more RAM, or more powerful database instance)
- something to do with credentials, connection string, port firewall etc. (in that case, the code would fail with a guarantee)
- merely the internet connection at my work is not reliable (there is no other obvious sign of connection trouble; the usual apps, browsing etc. work very well).
When I try with the `mysql` command line, the connectivity works very well consistently.
Della
(73 rep)
Feb 28, 2023, 02:27 AM
• Last activity: Mar 15, 2023, 02:15 AM
0 votes • 1 answer • 217 views
Proper way of constructing database of historical datasets
I've been attempting to learn how to build a database of historical sports statistics using postgresql/sqlalchemy ORM the past few days and could really use some advice on approach. My use case seems to be a bit different from examples I've read through and maybe that's why I'm having trouble. FWIW...
I've been attempting to learn how to build a database of historical sports statistics using postgresql/sqlalchemy ORM the past few days and could really use some advice on approach. My use case seems to be a bit different from examples I've read through and maybe that's why I'm having trouble. FWIW I've worked with this type of data for years in Excel and later python/pandas. I guess I'm just so used to the way of going about things within those constructs that sqlalchemy has my brain all twisted up. Below are some general guidelines/questions and I will add some examples of my mapped classes soon.
1. Mapped classes for schools (~360), their respective conference (~15) and players on their roster (~5000) from 2010-present. Many more beyond that but trying to be somewhat brief.
2. Schools can change conferences and players can change schools in between years. From an implementation standpoint do I need to have separate rows for conferences, schools, players for each year or would that be viewed as redundant?
3. The data that I'm attempting to use comes from multiple sources with different ids that need to somehow be "connected", either before adding to the database or preferably within it. Below is a general example of data that would be added. I do have some flexibility surrounding what is or isn't inserted after parsing responses from the web but have been trying to keep that to a minimum.
espn = {"name": "Duke", "id": 4930, "venue_id": 5890, "venue_name": "Cameron Indoor Stadium"}
ncaa = {"name": "Duke Blue Devils", "id": 165646, "wins": 24, "losses": 10}
4. When all is said and done I would like to be able to retrieve specific segments of data either as is or compiled. ie. statistical averages for players on Duke roster for games played at their home venue during the 2013 season.
Hopefully this at least makes some sense and it doesn't seem as though I'm looking for someone to drive the car for me; I could just use a few helpful pointers to get me in the right direction. Thanks in advance for any thoughts/suggestions.
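For point 2, one possible shape (a sketch with hypothetical names, not a full design) is to keep a single row per school and move the year-dependent facts into season-keyed membership tables; the external ids from the different sources (point 3) can live as extra columns on that one canonical row:

```python
# A sketch: season-scoped conference membership instead of duplicating schools
# per year; external ids (ESPN, NCAA) are columns on the canonical school row.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Conference(Base):
    __tablename__ = "conference"
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)

class School(Base):
    __tablename__ = "school"
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)
    espn_id = Column(Integer, unique=True)
    ncaa_id = Column(Integer, unique=True)

class ConferenceMembership(Base):
    __tablename__ = "conference_membership"
    id = Column(Integer, primary_key=True)
    school_id = Column(Integer, ForeignKey("school.id"), nullable=False)
    conference_id = Column(Integer, ForeignKey("conference.id"), nullable=False)
    season = Column(Integer, nullable=False)  # e.g. 2013
```

A roster table linking players to schools per season would follow the same pattern.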
Nick
(101 rep)
Mar 10, 2023, 04:09 AM
• Last activity: Mar 10, 2023, 08:54 AM
0 votes • 0 answers • 205 views
Unable to connect to MySQL server on Ubuntu after SQLAlchemy connection
**Context:**
I have installed mysql-server on Ubuntu and was able to set up a password for the root account and also created a few accounts with all privileges. I was able to connect as any user or as root using the usual `mysql -uroot -p`; I enter my password and I connect to the databases.
**Issue:**
I have a python flask application that uses SQLAlchemy to connect to the database using a username that I created, with a password. That user has all privileges. After I ran my application, I am UNABLE to connect to any user account from the terminal but the SQLAlchemy connection works fine via the flask application.
**Fixes tried**
- I tried restarting the mysql service.
- Turned off the flask application.
- I tried restarting the linux host.
I need to fix this issue and be able to connect to the mysql database from terminal as well as from the application (currently only application connects to the database).
Rookie_Coder2318
(1 rep)
Oct 27, 2022, 06:45 AM
5 votes • 2 answers • 62657 views
How to set an auto-increment value in SQLAlchemy?
This is my code:

```python
class csvimportt(Base):
    __tablename__ = 'csvimportt'
    # id = Column(INTEGER, primary_key=True, autoincrement=True)
    aid = Column(INTEGER(unsigned=True, zerofill=True),
                 Sequence('article_aid_seq', start=1001, increment=1),
                 primary_key=True)
```
I want to set an auto-increment value starting from 1000. How do I do it?
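A sketch of one way to do it (assuming the MySQL/MariaDB backend that the `unsigned`/`zerofill` INTEGER suggests, where the `Sequence` is not rendered): create the table and then set its `AUTO_INCREMENT` counter explicitly.

```python
# A sketch: set the AUTO_INCREMENT start value with an ALTER TABLE right
# after create_all(); the Sequence() in the model has no effect on MySQL.
from sqlalchemy import Column, create_engine, text
from sqlalchemy.dialects.mysql import INTEGER
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Csvimportt(Base):
    __tablename__ = 'csvimportt'
    aid = Column(INTEGER(unsigned=True, zerofill=True), primary_key=True, autoincrement=True)

engine = create_engine("mysql+pymysql://user:pass@localhost/mydb")  # placeholder URL
Base.metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(text("ALTER TABLE csvimportt AUTO_INCREMENT = 1001"))
```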
Farhaan Patel
(51 rep)
May 25, 2018, 07:21 AM
• Last activity: Sep 1, 2022, 04:34 AM
0 votes • 0 answers • 140 views
Is there a cross join in these CTEs?
I have a query that is constructed from several CTEs (in order to organize the query). I constructed it in my SQL workbench and was happy with the test results on a smaller dataset, so I converted it to SQLAlchemy code for executing it on the production database. Now, SQLAlchemy warns me of a cross join:
SELECT statement has a cartesian product between FROM element(s) "vm_per_bus" and FROM element "voltages". Apply join condition(s) between each element to resolve.
In my query (see below), this happens in the last CTE. My question is now twofold:
1. Is there really an accidental cartesian join?
2. How do I join the two CTEs?
This is the query:
WITH voltages AS
(SELECT muscle_actions.id AS id,
regexp_replace(CAST(experiment_runs.document_json['schedule_config']['phase_0']['environments']['environment']['params']['reward']['name'] AS TEXT), '^[^:]+:([^"]+)"', '\1') AS reward_function,
agents.uid AS agent,
jsonb_array_elements(jsonb_path_query_array(CAST(muscle_actions.sensor_readings AS JSONB), '$.*.value')) AS vm
FROM muscle_actions
JOIN agents ON agents.id = muscle_actions.agent_id
JOIN experiment_run_phases ON experiment_run_phases.id = agents.experiment_run_phase_id
JOIN experiment_run_instances ON experiment_run_instances.id = experiment_run_phases.experiment_run_instance_id
JOIN experiment_runs ON experiment_runs.id = experiment_run_instances.experiment_run_id
WHERE experiment_runs.id IN (8,
11,
19,
25,
29,
33,
37,
41,
45,
49,
53,
57,
61,
65,
69,
73,
77,
81,
85,
89,
93,
97,
120,
122,
137,
127,
132,
142,
147,
152,
157,
162,
199,
204,
209,
214,
219,
224,
229)
AND experiment_run_phases.uid = 'phase_1'),
vm_per_bus AS
(SELECT voltages.id AS id,
voltages.reward_function AS reward_function,
voltages.agent AS agent,
voltages.vm AS vm,
abs(1.0 - CAST(voltages.vm AS REAL)) AS vm_deviation,
abs(1.0 - CAST(voltages.vm AS REAL)) >= 0.15 AS vm_violation
FROM voltages),
vm AS
(SELECT vm_per_bus.id AS id,
max(voltages.reward_function) AS reward_function,
max(voltages.agent) AS agent,
array_agg(vm_per_bus.vm) AS array_agg_1,
max(vm_per_bus.vm_deviation) AS max_vm_deviation,
sum(CAST(vm_per_bus.vm_violation AS INTEGER)) AS num_vm_violations
FROM vm_per_bus,
voltages
GROUP BY vm_per_bus.id)
SELECT vm.reward_function,
vm.agent,
max(vm.max_vm_deviation) AS max_abs_violation,
sum(vm.num_vm_violations) AS num_violations
FROM vm
GROUP BY vm.reward_function,
vm.agent
Technaton
(171 rep)
May 27, 2022, 08:06 PM
0 votes • 1 answer • 454 views
Use of MySQL trigger to insert row's week number
I have a table of orders in a MariaDB database that currently has a datetime column. For the dashboard I am building for this database, I have a statistic I would like to display which is "Orders This Week."
My question is this: Can a trigger be used to take the datetime from a just-inserted order and insert the week number of that date into another column? If so, can I also use SQLAlchemy to match the order's week number to Python's `strftime('%U')` to see orders in a given week?
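A sketch of such a trigger (the `orders` table and the `ordered_at`/`week_number` columns are hypothetical names): `WEEK(date, 0)` counts Sunday-based weeks, which should line up with Python's `strftime('%U')` when compared as integers.

```python
# A sketch: a BEFORE INSERT trigger that fills week_number from the order's
# datetime, created through SQLAlchemy; WEEK(..., 0) uses Sunday-based weeks
# like Python's strftime('%U').
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pass@localhost/orders_db")  # placeholder URL

trigger_ddl = """
CREATE TRIGGER orders_set_week
BEFORE INSERT ON orders
FOR EACH ROW
SET NEW.week_number = WEEK(NEW.ordered_at, 0)
"""

with engine.begin() as conn:
    conn.execute(text(trigger_ddl))
```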
smallpants
(101 rep)
Mar 30, 2017, 03:57 PM
• Last activity: Mar 16, 2022, 06:01 PM
Showing page 1 of 20 total questions