Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
0
answers
24
views
Drizzle migrations don't seem to be repeatable
In our Node.js backend we are using Drizzle ORM with Postgres, but our migrations are failing for some of our customers' databases, though not for all of them. Also, when I create a new empty database and run the migrations, they succeed the first time, but afterwards they start failing too.
...
--> statement-breakpoint
CREATE TABLE IF NOT EXISTS "employees" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"user_id" uuid,
"first_name" varchar(255) NOT NULL,
"last_name" varchar(255) NOT NULL,
"company_email" text,
"telephone" varchar(255),
"bank_account_number" varchar(255),
"extra_info" text,
"date_employment_start" timestamp,
"date_employment_end" timestamp,
"company_id" uuid NOT NULL,
"company_function_id" uuid,
....
CONSTRAINT "employees_id_unique" UNIQUE("id")
);
--> statement-breakpoint
...
--> statement-breakpoint
CREATE TABLE IF NOT EXISTS "company_functions" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"name" varchar(255) NOT NULL,
"description" text,
"department_id" uuid,
"created_at" timestamp DEFAULT now(),
"updated_at" timestamp DEFAULT now(),
"company_id" uuid NOT NULL,
CONSTRAINT "company_functions_id_unique" UNIQUE("id"),
CONSTRAINT "unique_company_function_name" UNIQUE("company_id","name","department_id")
);
--> statement-breakpoint
...
--> statement-breakpoint
DO $$ BEGIN
ALTER TABLE "employees" ADD CONSTRAINT "employees_company_function_id_company_functions_id_fk" FOREIGN KEY ("company_function_id") REFERENCES "public"."company_functions"("id") ON DELETE cascade ON UPDATE no action;
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
--> statement-breakpoint
...
The second part is where the migration fails:

DO $$
BEGIN
ALTER TABLE "employees" DROP COLUMN IF EXISTS "company_function_id";
EXCEPTION
WHEN undefined_column THEN NULL;
END $$;
But what I find strange is that these SQL statements should not be able to run twice.
The first run makes sense.
But on the second run the table will not be recreated (because of the IF NOT EXISTS part),
so company_function_id will be gone.
Yet the migration still tries to add a foreign key constraint on that column, which no longer exists.
How can this be compatible?
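For illustration, a hedged sketch (not Drizzle's generated SQL) of how the ALTER could be made idempotent by guarding on the column actually existing before adding the constraint:

```sql
DO $$
BEGIN
  -- Only attempt the FK if the column is still present
  IF EXISTS (
      SELECT 1
      FROM information_schema.columns
      WHERE table_schema = 'public'
        AND table_name   = 'employees'
        AND column_name  = 'company_function_id'
  ) THEN
    ALTER TABLE "employees"
      ADD CONSTRAINT "employees_company_function_id_company_functions_id_fk"
      FOREIGN KEY ("company_function_id")
      REFERENCES "public"."company_functions"("id")
      ON DELETE cascade ON UPDATE no action;
  END IF;
EXCEPTION
  WHEN duplicate_object THEN NULL;
END $$;
```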
michielQaro
(1 rep)
Aug 4, 2025, 02:01 PM
-1
votes
0
answers
78
views
OPENJSON is evaluated as many times as filtered row count
I have a query generated by EF Core 9. EF uses the OPENJSON function on MS SQL Server when you use Contains in an EF Core query. With the generated query, where OPENJSON is used in a subquery, SQL Server evaluates it as many times as there are joined/filtered rows. That is very bad: the query from the example finishes in almost **4 seconds**. When I replace it with WHERE IN, it finishes in **~50 ms**.
Unfortunately, that OPENJSON sometimes contains a lot of items, so I cannot replace it permanently.
Is there any solution to tell SQL Server to evaluate this OPENJSON only once?
-- or --
Is it a BUG in MS SQL Server 2019?
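A workaround sketch (hedged; `@json`, `dbo.SomeTable`, and its `Id` column are placeholder names, not the EF-generated ones) is to shred the JSON once into a temp table and filter against that:

```sql
DECLARE @json nvarchar(max) = N'[1,2,3]';

-- Evaluate OPENJSON exactly once, up front
SELECT CAST(j.[value] AS int) AS id
INTO #ids
FROM OPENJSON(@json) AS j;

-- The filter now reads a materialized set instead of re-running OPENJSON per row
SELECT t.*
FROM dbo.SomeTable AS t
WHERE t.Id IN (SELECT i.id FROM #ids AS i);
```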

dariol
(99 rep)
Jul 14, 2025, 12:17 PM
1
votes
1
answers
157
views
What and where to index in a rapidly increasing Postgres table
I work as a software engineer with Python and Django. Currently I am struggling with a design choice made before my time. We have a transaction table that logs all customer activity. Due to the success of the platform, the data in the table is rapidly increasing, and I have trouble getting the query time down to a manageable size. I have this example of a query that runs extremely slow. I guess some good indexing could do the job, but I don't really know where to start. I would love some tips on how to help myself (any quality posts/books or other resources) or how to solve this problem. If at all possible, I would like to avoid manual queries and just use the ORM. I placed the * in the SELECT to make the query more readable.
SELECT *
FROM "customer_customer"
INNER JOIN "customer_transaction"
ON ("customer_customer"."id" = "customer_transaction"."customer_id")
WHERE ("customer_customer"."status" = 1
AND NOT ("customer_customer"."id" IN ( SELECT U1."customer_id" AS Col1
FROM "customer_transaction" U1
WHERE U1."transaction_type" IN (30, 14)
)
)
AND "customer_transaction"."date" >= '2018-05-11 11:01:43.598530+02:00')
As asked in the comments, here is additional info: currently I am running the commands on my local computer, and the query is generated by the ORM.
CREATE of the customer table:
--
-- PostgreSQL database dump
--
-- Dumped from database version 9.6.0
-- Dumped by pg_dump version 9.6.0
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: customer_customer; Type: TABLE; Schema: public; Owner: postgres
--
CREATE TABLE customer_customer (
id integer NOT NULL,
firstname character varying(63),
phone character varying(31),
terms_accepted boolean NOT NULL,
user_id integer,
cashed_vip_points integer NOT NULL,
vip_points integer NOT NULL,
receive_mail boolean NOT NULL,
mailchimp_email character varying(254),
image character varying(100),
image_thumb character varying(100),
favorite_hash character varying(31),
has_accepted_favorite_hint boolean NOT NULL,
address_id integer,
blog_url character varying(200),
instagram_username character varying(200),
overrule_default_vip_points integer,
status integer NOT NULL,
signature boolean,
signature_date date,
store_id_id integer,
shopping_mail boolean NOT NULL,
CONSTRAINT customer_customer_overrule_default_vip_points_check CHECK ((overrule_default_vip_points >= 0))
);
ALTER TABLE customer_customer OWNER TO postgres;
--
-- Name: customer_customer_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres
--
CREATE SEQUENCE customer_customer_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
ALTER TABLE customer_customer_id_seq OWNER TO postgres;
--
-- Name: customer_customer_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: postgres
--
ALTER SEQUENCE customer_customer_id_seq OWNED BY customer_customer.id;
--
-- Name: customer_customer id; Type: DEFAULT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_customer ALTER COLUMN id SET DEFAULT nextval('customer_customer_id_seq'::regclass);
--
-- Name: customer_customer customer_customer_address_id_key; Type: CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_customer
ADD CONSTRAINT customer_customer_address_id_key UNIQUE (address_id);
--
-- Name: customer_customer customer_customer_favorite_hash_key; Type: CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_customer
ADD CONSTRAINT customer_customer_favorite_hash_key UNIQUE (favorite_hash);
--
-- Name: customer_customer customer_customer_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_customer
ADD CONSTRAINT customer_customer_pkey PRIMARY KEY (id);
--
-- Name: customer_customer customer_customer_user_id_key; Type: CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_customer
ADD CONSTRAINT customer_customer_user_id_key UNIQUE (user_id);
--
-- Name: customer_customer_211f6852; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_customer_211f6852 ON customer_customer USING btree (store_id_id);
--
-- Name: customer_customer customer_custo_address_id_41aab9497590bc7_fk_address_address_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_customer
ADD CONSTRAINT customer_custo_address_id_41aab9497590bc7_fk_address_address_id FOREIGN KEY (address_id) REFERENCES address_address(id) DEFERRABLE INITIALLY DEFERRED;
--
-- Name: customer_customer customer_custom_store_id_id_67b8071f917b6245_fk_stores_store_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_customer
ADD CONSTRAINT customer_custom_store_id_id_67b8071f917b6245_fk_stores_store_id FOREIGN KEY (store_id_id) REFERENCES stores_store(id) DEFERRABLE INITIALLY DEFERRED;
--
-- Name: customer_customer customer_customer_user_id_482ced6557101913_fk_auth_user_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_customer
ADD CONSTRAINT customer_customer_user_id_482ced6557101913_fk_auth_user_id FOREIGN KEY (user_id) REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED;
--
-- PostgreSQL database dump complete
--
And of the transaction table:
--
-- PostgreSQL database dump
--
-- Dumped from database version 9.6.0
-- Dumped by pg_dump version 9.6.0
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: customer_transaction; Type: TABLE; Schema: public; Owner: postgres
--
CREATE TABLE customer_transaction (
id integer NOT NULL,
points integer,
transaction_type integer NOT NULL,
customer_id integer NOT NULL,
date timestamp with time zone NOT NULL,
product_id integer,
fotostream_entry_id integer,
acommit_transaction_id character varying(36),
amount numeric(6,2),
has_storno_id integer,
merged_customernumber_id integer,
message_de character varying(255),
message_fr character varying(255),
points_befor_migration integer,
store_id integer,
storno_from_id integer,
user_id integer,
_transaction_type_messages_id integer,
aac_import_row character varying(5000)
);
ALTER TABLE customer_transaction OWNER TO postgres;
--
-- Name: customer_transaction_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres
--
CREATE SEQUENCE customer_transaction_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
ALTER TABLE customer_transaction_id_seq OWNER TO postgres;
--
-- Name: customer_transaction_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: postgres
--
ALTER SEQUENCE customer_transaction_id_seq OWNED BY customer_transaction.id;
--
-- Name: customer_transaction id; Type: DEFAULT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction ALTER COLUMN id SET DEFAULT nextval('customer_transaction_id_seq'::regclass);
--
-- Name: customer_transaction customer_transaction_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction
ADD CONSTRAINT customer_transaction_pkey PRIMARY KEY (id);
--
-- Name: customer_transacti_acommit_transaction_id_71030b1b69b97709_like; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transacti_acommit_transaction_id_71030b1b69b97709_like ON customer_transaction USING btree (acommit_transaction_id varchar_pattern_ops);
--
-- Name: customer_transacti_acommit_transaction_id_71030b1b69b97709_uniq; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transacti_acommit_transaction_id_71030b1b69b97709_uniq ON customer_transaction USING btree (acommit_transaction_id);
--
-- Name: customer_transaction_7473547c; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_7473547c ON customer_transaction USING btree (store_id);
--
-- Name: customer_transaction_928570bc; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_928570bc ON customer_transaction USING btree (merged_customernumber_id);
--
-- Name: customer_transaction_9524d7ad; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_9524d7ad ON customer_transaction USING btree (_transaction_type_messages_id);
--
-- Name: customer_transaction_9bea82de; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_9bea82de ON customer_transaction USING btree (product_id);
--
-- Name: customer_transaction_b65a298f; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_b65a298f ON customer_transaction USING btree (fotostream_entry_id);
--
-- Name: customer_transaction_cb24373b; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_cb24373b ON customer_transaction USING btree (customer_id);
--
-- Name: customer_transaction_d9b62ea2; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_d9b62ea2 ON customer_transaction USING btree (storno_from_id);
--
-- Name: customer_transaction_date_bd33b3ac; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_date_bd33b3ac ON customer_transaction USING btree (date);
--
-- Name: customer_transaction_e8701ad4; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_e8701ad4 ON customer_transaction USING btree (user_id);
--
-- Name: customer_transaction_f2c0da2f; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_f2c0da2f ON customer_transaction USING btree (has_storno_id);
--
-- Name: customer_transaction_transaction_type_36582b63; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_transaction_type_36582b63 ON customer_transaction USING btree (transaction_type);
--
-- Name: customer_transaction_transaction_type_custome_3619995d_idx; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_transaction_type_custome_3619995d_idx ON customer_transaction USING btree (transaction_type, customer_id, date);
--
-- Name: customer_transaction_transaction_type_customer_id_3eb6f7d0_idx; Type: INDEX; Schema: public; Owner: postgres
--
CREATE INDEX customer_transaction_transaction_type_customer_id_3eb6f7d0_idx ON customer_transaction USING btree (transaction_type, customer_id);
--
-- Name: customer_transaction D4d691342aa107b3b4fb5a167936d123; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction
ADD CONSTRAINT "D4d691342aa107b3b4fb5a167936d123" FOREIGN KEY (merged_customernumber_id) REFERENCES customer_customernumber(id) DEFERRABLE INITIALLY DEFERRED;
--
-- Name: customer_transaction D6e0c79ad7a40ca02054ed28c4d6999c; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction
ADD CONSTRAINT "D6e0c79ad7a40ca02054ed28c4d6999c" FOREIGN KEY (_transaction_type_messages_id) REFERENCES customer_transactiontype(id) DEFERRABLE INITIALLY DEFERRED;
--
-- Name: customer_transaction D9460b882ac4401f6adf8077475229ed; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction
ADD CONSTRAINT "D9460b882ac4401f6adf8077475229ed" FOREIGN KEY (fotostream_entry_id) REFERENCES fotostream_fotostreamentry(id) DEFERRABLE INITIALLY DEFERRED;
--
-- Name: customer_transaction cust_storno_from_id_6a48315f632674fa_fk_customer_transaction_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction
ADD CONSTRAINT cust_storno_from_id_6a48315f632674fa_fk_customer_transaction_id FOREIGN KEY (storno_from_id) REFERENCES customer_transaction(id) DEFERRABLE INITIALLY DEFERRED;
--
-- Name: customer_transaction custo_has_storno_id_116b248645f7fd59_fk_customer_transaction_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction
ADD CONSTRAINT custo_has_storno_id_116b248645f7fd59_fk_customer_transaction_id FOREIGN KEY (has_storno_id) REFERENCES customer_transaction(id) DEFERRABLE INITIALLY DEFERRED;
--
-- Name: customer_transaction customer_product_id_428b30409c797b6b_fk_lookbook_productbase_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction
ADD CONSTRAINT customer_product_id_428b30409c797b6b_fk_lookbook_productbase_id FOREIGN KEY (product_id) REFERENCES lookbook_productbase(id) DEFERRABLE INITIALLY DEFERRED;
--
-- Name: customer_transaction customer_t_customer_id_7962b09af88fe147_fk_customer_customer_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction
ADD CONSTRAINT customer_t_customer_id_7962b09af88fe147_fk_customer_customer_id FOREIGN KEY (customer_id) REFERENCES customer_customer(id) DEFERRABLE INITIALLY DEFERRED;
--
-- Name: customer_transaction customer_transacti_store_id_4014f4c86692b54b_fk_stores_store_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction
ADD CONSTRAINT customer_transacti_store_id_4014f4c86692b54b_fk_stores_store_id FOREIGN KEY (store_id) REFERENCES stores_store(id) DEFERRABLE INITIALLY DEFERRED;
--
-- Name: customer_transaction customer_transaction_user_id_3497aca364c6a472_fk_auth_user_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
--
ALTER TABLE ONLY customer_transaction
ADD CONSTRAINT customer_transaction_user_id_3497aca364c6a472_fk_auth_user_id FOREIGN KEY (user_id) REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED;
--
-- PostgreSQL database dump complete
--
And here is the execution plan:
Merge Join (cost=14112.60..11422372541.38 rows=505000 width=671) (actual time=1351.692..16713328.367 rows=146436 loops=1)
Merge Cond: (customer_customer.id = customer_transaction.customer_id)
Buffers: shared hit=1922626 read=720356, temp read=98686908 written=1364
-> Index Scan using customer_customer_pkey on customer_customer (cost=14096.99..11422109016.84 rows=81175 width=121) (actual time=1342.257..16649665.313 rows=35553 loops=1)
Filter: ((status = 1) AND (NOT (SubPlan 1)))
Rows Removed by Filter: 309213
Buffers: shared hit=156156 read=72783, temp read=98686908 written=1364
SubPlan 1
-> Materialize (cost=14096.57..78342.29 rows=805641 width=4) (actual time=0.007..52.406 rows=356853 loops=161642)
Buffers: shared hit=1667 read=25275, temp read=98686908 written=1364
-> Bitmap Heap Scan on customer_transaction u1 (cost=14096.57..71166.08 rows=805641 width=4) (actual time=147.297..485.822 rows=797943 loops=1)
Recheck Cond: (transaction_type = ANY ('{30,14}'::integer[]))
Heap Blocks: exact=24756
Buffers: shared hit=1667 read=25275
-> Bitmap Index Scan on customer_transaction_transaction_type_customer_id_3eb6f7d0_idx (cost=0.00..13895.16 rows=805641 width=0) (actual time=140.944..140.944 rows=797943 loops=1)
Index Cond: (transaction_type = ANY ('{30,14}'::integer[]))
Buffers: shared hit=1 read=2185
-> Index Scan using customer_transaction_cb24373b on customer_transaction (cost=0.43..252918.54 rows=2144835 width=550) (actual time=0.039..63012.608 rows=2143881 loops=1)
Filter: (date >= '2018-05-11 11:01:43.59853+02'::timestamp with time zone)
Rows Removed by Filter: 1048039
Buffers: shared hit=1766470 read=647573
Planning time: 16.013 ms
Execution time: 16713362.490 ms
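For context, two hedged sketches of what could be tried here (assumptions, untested against this data): a composite index so the join and the date filter are served together, and a NOT EXISTS rewrite of the NOT IN subquery, whose materialization the plan shows being rescanned 161,642 times:

```sql
CREATE INDEX customer_transaction_customer_id_date_idx
    ON customer_transaction (customer_id, date);

SELECT *
FROM customer_customer cc
JOIN customer_transaction ct ON cc.id = ct.customer_id
WHERE cc.status = 1
  AND ct.date >= '2018-05-11 11:01:43.598530+02:00'
  AND NOT EXISTS (
        SELECT 1
        FROM customer_transaction u1
        WHERE u1.customer_id = cc.id
          AND u1.transaction_type IN (30, 14)
      );
```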
Phteven
(111 rep)
Nov 7, 2018, 10:38 AM
• Last activity: Jul 12, 2025, 03:03 PM
0
votes
0
answers
16
views
Can we update an entity in @PrePersist of another entity
Currently I have two entities (Employee, Company), and Employee has a many-to-one relation to Company. Company has a field lastPersonJoined, and on each Employee insertion, lastPersonJoined in the Company entity is updated. This could be implemented in the service layer as well, but I really want to understand: can we do this in @PrePersist or @PostPersist?
When I do the above, I don't face any errors, but it's not committing to the DB. Correct me if I am wrong: I think since Company is not a managed entity, it's not saving to the DB.
@Entity
public class Employee {
    ...
    @Column(name = "name")
    public String name;

    @ManyToOne(fetch = FetchType.LAZY, optional = false, cascade = {CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH})
    @JoinColumn(name = "id", referencedColumnName = "id", nullable = false)
    @ToString.Exclude
    private Company company;

    @PrePersist
    public void prePersist() {
        company.setLastManJoined(name);
    }
}
gcs
(1 rep)
May 14, 2025, 04:44 PM
• Last activity: May 14, 2025, 04:50 PM
0
votes
1
answers
1204
views
Using a full text index with PostgreSQL and Doctrine
I received the following error:
SQLSTATE[54000]: Program limit exceeded: 7 ERROR: index row size 2728 exceeds btree version 4 maximum 2704 for index "fulltext"
DETAIL: Index row references tuple (1,33) in relation "manual".
HINT: Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text indexing. (54000) class: PDOException
The table in question is:
\d manual
Table "public.manual"
Column | Type | Collation | Nullable | Default
--------------+-----------------------+-----------+----------+------------------------------------
id | integer | | not null | nextval('manual_id_seq'::regclass)
parent_id | integer | | |
name | character varying(45) | | not null |
content | text | | |
keywords | text | | |
listorder | smallint | | |
display_list | boolean | | |
Indexes:
"manual_pkey" PRIMARY KEY, btree (id)
"unique_name_manual" UNIQUE, btree (name)
"fulltext" btree (keywords, name, content)
"idx_10dbbec4727aca70" btree (parent_id)
Foreign-key constraints:
"fk_10dbbec4727aca70" FOREIGN KEY (parent_id) REFERENCES manual(id)
Referenced by:
TABLE "manual" CONSTRAINT "fk_10dbbec4727aca70" FOREIGN KEY (parent_id) REFERENCES manual(id)
I thought I had a full text index per my Doctrine config ``, but it appears I don't. Am I doing something wrong, or is this a Doctrine deficiency? Recommended solutions?
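For comparison, a hedged sketch of what an actual Postgres full-text index on this table could look like (the 'english' configuration is an assumption), as opposed to the btree over raw text columns shown above:

```sql
CREATE INDEX manual_fulltext_idx ON manual
USING gin (to_tsvector('english',
    coalesce(keywords, '') || ' ' || name || ' ' || coalesce(content, '')));
```

A GIN index of this kind is not subject to the btree row-size limit from the error message.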
user1032531
(153 rep)
May 12, 2020, 12:50 PM
• Last activity: Apr 20, 2025, 09:03 PM
0
votes
1
answers
28
views
Defining hasMany self-associations in sequelize v7
export default class Folder extends Model<InferAttributes<Folder>, InferCreationAttributes<Folder>> {
  @Attribute(DataTypes.INTEGER)
  @PrimaryKey
  @AutoIncrement
  declare id: CreationOptional<number>;

  @Attribute(DataTypes.STRING)
  @NotNull
  declare title: string;

  @Attribute(DataTypes.INTEGER)
  declare parentId: number;

  declare parent?: NonAttribute<Folder>;
  declare children?: NonAttribute<Folder[]>;

  @HasMany(() => Folder, {
    // as: 'children',
    foreignKey: 'parentId',
    inverse: {
      as: 'parent'
    },
  })
For the HasMany association, if I don't include the as: 'children' property, I get the following errors from Sequelize:
- SequelizeAssociationError: Defining HasMany association "parent" from _Folder to _Folder failed
- Caused by: SequelizeAssociationError: Both options "as" and "inverse.as" must be defined for hasMany self-associations, and their value must be different.

However, if I include the as: 'children' property, I get the following errors from TypeScript & Sequelize:
- Object literal may only specify known properties, and 'as' does not exist in type 'Omit<…, "as">'.ts(2353)
- Error: The "as" option is not allowed when using association decorators. The name of the decorated field is used as the association name.

How do I resolve this and get the self-association to work?
Narayan Waraich
(1 rep)
Jul 10, 2024, 12:44 AM
• Last activity: Jan 3, 2025, 04:27 PM
0
votes
0
answers
49
views
Eager loading relational data without including link table data using KnexJS or TypeORM
I have **Table A** and **Table B** in my Postgres DB. The relations are built using link tables.
**Example**: ***ab_link*** table record contains ***a_id*** and ***b_id***.
What I am trying to do is get the ObjectA from table A with related ObjectB nested inside ObjectA.
When I am querying the data the ab_link table data is also added.
I have seen examples where the data is manually processed after querying, and I am trying to avoid that because my data relations are complex and nested multiple levels deep.
How can we get the A and B data without the ab_link records added to the queried data, using **KnexJS**/**TypeORM**?
// USING KNEXJS
const a = await this.db('a')
.innerJoin('ab_links', 'a.id', 'ab_links.a_id')
.innerJoin('b', 'ab_links.b_id', 'b.id')
.where('a.id', id)
.then((rows) => {
return rows
})
.catch((error) => {
console.error(error);
});
Output a: {
id: 2,
name: 'Test',
b_id: 2
}
// USING TYPEORM
const req = await (await this.aRepository.queryBuilder(
a
))
.leftJoinAndSelect('a.ab_links', 'abl')
.leftJoinAndSelect('abl.b', 'b')
.where({ id: id })
.getOne();
Output: {
id:1,
name: 'test',
ab_links: {
id:4,
a_id:1,
b_id:2,
b: {
id:2,
name: 'test2'
}
}
}
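If raw SQL (e.g. via knex.raw) is acceptable, a hedged sketch of doing the nesting server-side in Postgres, so no ab_link rows ever reach the application (table and column names taken from the example above):

```sql
SELECT a.id,
       a.name,
       json_agg(json_build_object('id', b.id, 'name', b.name)) AS b
FROM a
JOIN ab_links l ON l.a_id = a.id
JOIN b         ON b.id   = l.b_id
WHERE a.id = 2
GROUP BY a.id, a.name;
```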
JeffinJ
(101 rep)
Nov 2, 2023, 01:31 PM
• Last activity: Nov 2, 2023, 02:13 PM
0
votes
0
answers
33
views
Database Design Suggestion for Ordering System
This migration was made using Prisma ORM, with MySQL as the main database. The schema is made exclusively for an ordering system.
I would just like to ask for suggestions on whether or not the relations/fields within these tables are necessary, and whether there are any performance issues with the way I set up the constraints, indexes, etc. I didn't remove the comments, as they might help explain why I put certain things in. I am also not fully sure about the way I've set up roles and permissions for administrator users. I would love to hear suggestions/changes/additions/etc.
Prisma syntax isn't fully supported by code blocks on this website; you can use the [Prisma Playground](https://playground.prisma.io/) for syntax highlighting.
datasource db {
provider = "mysql"
url = env("DATABASE_URL")
relationMode = "prisma"
}
generator client {
provider = "prisma-client-js"
}
// This model will define the role of a user, i.e admin, customer, etc
model Role {
id Int @id @default(autoincrement())
name String @unique // A role should have a unique name. i.e admin, customer
permissions Permission[] // A role can have many permissions
users User[] // A role can have many users
}
//
model Permission {
id Int @id @default(autoincrement())
name String @unique // A permission should have a unique name. i.e create, read, update, delete, etc
roles Role[] // A permission can have many roles
}
model User {
id Int @id @default(autoincrement())
username String @unique
email String @unique
emailVerified DateTime?
phoneNumber String?
hashedPassword String
image String?
roleId Int?
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
role Role? @relation(fields: [roleId], references: [id]) // A user can have one role
cart Cart[]
orders Order[] // A user can have many orders
@@index([roleId], name: "role_id")
}
model Cart {
id Int @id @default(autoincrement())
userId Int // A cart is associated with a user
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
cartItems CartItem[] // A cart can have many items
users User @relation(fields: [userId], references: [id]) // A cart is only associated with one user
@@unique([userId], name: "unique_user_id") // A user can only have one cart, reenforced by the unique constraint
@@index([userId], name: "user_id")
}
model CartItem {
id Int @id @default(autoincrement())
quantity Int
cartId Int
productId Int
product Product @relation(fields: [productId], references: [id]) // A cart item is associated with a product
cart Cart @relation(fields: [cartId], references: [id]) // A cart item is associated with a cart
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
@@index([cartId], name: "cart_items_cart_id")
@@index([productId], name: "cart_items_product_id")
}
model Category {
id Int @id @default(autoincrement())
name String @unique(map: "unique_category_name")
products Product[] // A category can have many products
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
@@unique([name], name: "unique_category_name") // A category should have a unique name
}
// Categories might be associated with different types, i.e drinks and foods product have similar categories, but overall have different properties associated with them, not important for now
// This model will represent the overall order of a user and its information
model Order {
id Int @id @default(autoincrement())
userId Int
orderStatus String
paymentMethod String?
transactionId String?
// shippingAddress String?
totalAmount Float?
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
transaction Transaction?
isPaid Boolean @default(false)
orderItems OrderItem[] // An order can have many items
user User @relation(fields: [userId], references: [id]) // An order is associated with a user
@@index([userId], name: "user_id")
}
// This model will represent each individual item in an order
model OrderItem {
id Int @id @default(autoincrement())
orderId Int
productId Int
quantity Int
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
product Product @relation(fields: [productId], references: [id]) // An order item is associated with a product
order Order @relation(fields: [orderId], references: [id], map: "order_items_order_id") // An order item is associated with an order
@@index([orderId], name: "order_items_order_id")
@@index([productId], name: "order_items_product_id")
}
model Product {
id Int @id @default(autoincrement())
name String
description String
price Float
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
images Image[]
inventory Inventory[]
cartItems CartItem[] // A product can have many cart items
orderItems OrderItem[] // A product can have many order items
category Category? @relation(fields: [categoryId], references: [id]) // A product is associated with a category
categoryId Int? // Foreign Key
@@index([categoryId])
}
// This model will represent the images associated with a product
model Image {
id Int @id @default(autoincrement())
url String @unique(map: "unique_image_url")
productId Int
product Product @relation(fields: [productId], references: [id], map: "image_product_id") // An image is associated with a product
@@index([productId], name: "image_product_id")
}
model Inventory {
id Int @id @default(autoincrement())
productId Int // Foreign Key
quantity Int
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
product Product @relation(fields: [productId], references: [id], map: "inventory_product_id") // An inventory is associated with a product
@@index([productId], name: "inventory_product_id")
}
// This model will represent the transaction associated with an order
model Transaction {
id Int @id @default(autoincrement())
transactionId String
orderId Int
totalAmount Float
transactionStatus String
paymentMethod String?
order Order @relation(fields: [orderId], references: [id])
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
@@unique([orderId], name: "unique_order_id")
@@index([orderId], name: "order_id")
}
And per request, here is the SQL that was automatically generated by the Prisma ORM:
CREATE TABLE `_PermissionToRole` (
  `A` int NOT NULL,
  `B` int NOT NULL,
  UNIQUE KEY `_PermissionToRole_AB_unique` (`A`, `B`),
  KEY `_PermissionToRole_B_index` (`B`)
);

CREATE TABLE `Cart` (
  `id` int NOT NULL AUTO_INCREMENT,
  `userId` int NOT NULL,
  `createdAt` datetime(3) NOT NULL DEFAULT current_timestamp(3),
  `updatedAt` datetime(3) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `Cart_userId_key` (`userId`),
  KEY `user_id` (`userId`)
);

CREATE TABLE `CartItem` (
  `id` int NOT NULL AUTO_INCREMENT,
  `quantity` int NOT NULL,
  `cartId` int NOT NULL,
  `productId` int NOT NULL,
  `createdAt` datetime(3) NOT NULL DEFAULT current_timestamp(3),
  `updatedAt` datetime(3) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `cart_items_cart_id` (`cartId`),
  KEY `cart_items_product_id` (`productId`)
);

CREATE TABLE `Category` (
  `id` int NOT NULL AUTO_INCREMENT,
  `name` varchar(191) NOT NULL,
  `createdAt` datetime(3) NOT NULL DEFAULT current_timestamp(3),
  `updatedAt` datetime(3) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `unique_category_name` (`name`),
  UNIQUE KEY `Category_name_key` (`name`)
);

CREATE TABLE `Image` (
  `id` int NOT NULL AUTO_INCREMENT,
  `url` varchar(191) NOT NULL,
  `productId` int NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `unique_image_url` (`url`),
  KEY `image_product_id` (`productId`)
);

CREATE TABLE `Inventory` (
  `id` int NOT NULL AUTO_INCREMENT,
  `productId` int NOT NULL,
  `quantity` int NOT NULL,
  `createdAt` datetime(3) NOT NULL DEFAULT current_timestamp(3),
  `updatedAt` datetime(3) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `inventory_product_id` (`productId`)
);

CREATE TABLE `Order` (
  `id` int NOT NULL AUTO_INCREMENT,
  `userId` int NOT NULL,
  `orderStatus` varchar(191) NOT NULL,
  `paymentMethod` varchar(191),
  `transactionId` varchar(191),
  `totalAmount` double,
  `createdAt` datetime(3) NOT NULL DEFAULT current_timestamp(3),
  `updatedAt` datetime(3) NOT NULL,
  `isPaid` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `user_id` (`userId`)
);

CREATE TABLE `OrderItem` (
  `id` int NOT NULL AUTO_INCREMENT,
  `orderId` int NOT NULL,
  `productId` int NOT NULL,
  `quantity` int NOT NULL,
  `createdAt` datetime(3) NOT NULL DEFAULT current_timestamp(3),
  `updatedAt` datetime(3) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `order_items_order_id` (`orderId`),
  KEY `order_items_product_id` (`productId`)
);

CREATE TABLE `Permission` (
  `id` int NOT NULL AUTO_INCREMENT,
  `name` varchar(191) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `Permission_name_key` (`name`)
);

CREATE TABLE `Product` (
  `id` int NOT NULL AUTO_INCREMENT,
  `name` varchar(191) NOT NULL,
  `description` varchar(191) NOT NULL,
  `price` double NOT NULL,
  `createdAt` datetime(3) NOT NULL DEFAULT current_timestamp(3),
  `updatedAt` datetime(3) NOT NULL,
  `categoryId` int,
  PRIMARY KEY (`id`),
  KEY `Product_categoryId_idx` (`categoryId`)
);

CREATE TABLE `Role` (
  `id` int NOT NULL AUTO_INCREMENT,
  `name` varchar(191) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `Role_name_key` (`name`)
);

CREATE TABLE `Transaction` (
  `id` int NOT NULL AUTO_INCREMENT,
  `transactionId` varchar(191) NOT NULL,
  `orderId` int NOT NULL,
  `totalAmount` double NOT NULL,
  `transactionStatus` varchar(191) NOT NULL,
  `paymentMethod` varchar(191),
  `createdAt` datetime(3) NOT NULL DEFAULT current_timestamp(3),
  `updatedAt` datetime(3) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `Transaction_orderId_key` (`orderId`),
  KEY `order_id` (`orderId`)
);

CREATE TABLE `User` (
  `id` int NOT NULL AUTO_INCREMENT,
  `username` varchar(191) NOT NULL,
  `email` varchar(191) NOT NULL,
  `emailVerified` datetime(3),
  `phoneNumber` varchar(191),
  `hashedPassword` varchar(191) NOT NULL,
  `image` varchar(191),
  `roleId` int,
  `createdAt` datetime(3) NOT NULL DEFAULT current_timestamp(3),
  `updatedAt` datetime(3) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `User_username_key` (`username`),
  UNIQUE KEY `User_email_key` (`email`),
  KEY `role_id` (`roleId`)
);

I don't know if this might be useful: every table is created with ENGINE=InnoDB, DEFAULT CHARSET=utf8mb4, COLLATE=utf8mb4_unicode_ci.
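One observation on the generated DDL: with relationMode = "prisma", no FOREIGN KEY constraints are emitted, so referential integrity lives entirely in the application. If database-level enforcement is wanted too, constraints would have to be added by hand; a hedged sketch for one relation (the constraint name is made up):

```sql
ALTER TABLE `CartItem`
  ADD CONSTRAINT `fk_cartitem_cart`
  FOREIGN KEY (`cartId`) REFERENCES `Cart` (`id`);
```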
Daniel Evans
(1 rep)
Oct 6, 2023, 10:43 AM
• Last activity: Oct 6, 2023, 11:48 PM
0
votes
0
answers
925
views
MySQL 8 taking 1000x times longer to execute this query vs 5.7 when ORDER BY is used
This query comes from an ORM (Laravel Eloquent) on a WordPress database. I upgraded from MySQL 5.7 to 8 (AWS RDS), and the performance went from pretty speedy to basically broken. API requests that previously took ~50ms were now taking 30-60 *seconds* to execute. It was so bad I thought it was a DB connection issue at first.
I used Laravel Telescope to diagnose it, and one particular type of query stood out immediately (see below).
I then discovered that if I removed the order by, the issue went away completely! (Incidentally, if I removed the big subquery, the query was also fast.)
The query:
select *
from `wp_posts`
where `post_type` = 'post'
  and exists (
      select *
      from `wp_term_taxonomy`
      inner join `wp_term_relationships`
              on `wp_term_taxonomy`.`term_taxonomy_id` = `wp_term_relationships`.`term_taxonomy_id`
      where `wp_posts`.`ID` = `wp_term_relationships`.`object_id`
        and `taxonomy` = 'tax_channel'
        and exists (
            select *
            from `wp_terms`
            where `wp_term_taxonomy`.`term_id` = `wp_terms`.`term_id`
              and `slug` in ('news')
        )
  )
  and (`post_status` = 'publish')
order by `wp_posts`.`post_date` desc
limit 2 offset 0
**EXPLAIN results for 8.0 (takes ~30-90 seconds):**
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
|----|-------------|-----------------------|------------|--------|-----------------------------------|------------------|---------|---------------------------------------------|------|----------|---------------------------------------------------------------|
| 1 | SIMPLE | wp_terms | | ref | PRIMARY,slug | slug | 766 | const | 2 | 100.00 | Using where; Using temporary; Using filesort; Start temporary |
| 1 | SIMPLE | wp_term_taxonomy | | eq_ref | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 138 | wordpress.wp_terms.term_id,const | 1 | 100.00 | Using index |
| 1 | SIMPLE | wp_term_relationships | | ref | PRIMARY,term_taxonomy_id | term_taxonomy_id | 8 | wordpress.wp_term_taxonomy.term_taxonomy_id | 238 | 100.00 | Using index |
| 1 | SIMPLE | wp_posts | | eq_ref | PRIMARY,type_status_date | PRIMARY | 8 | wordpress.wp_term_relationships.object_id | 1 | 50.00 | Using where; End temporary |
**EXPLAIN results for 5.7 (takes ~50ms):**
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
|----|--------------------|-----------------------|------------|--------|--------------------------|------------------|---------|--------------------------------------------------|--------|----------|-------------|
| 1 | PRIMARY | wp_posts | | ref | type_status_date | type_status_date | 164 | const,const | 176172 | 100.00 | Using where |
| 2 | DEPENDENT SUBQUERY | wp_term_relationships | | ref | PRIMARY,term_taxonomy_id | PRIMARY | 8 | wordpress.wp_posts.ID | 3 | 100.00 | Using index |
| 2 | DEPENDENT SUBQUERY | wp_term_taxonomy | | eq_ref | PRIMARY,taxonomy | PRIMARY | 8 | wordpress.wp_term_relationships.term_taxonomy_id | 1 | 5.00 | Using where |
| 3 | DEPENDENT SUBQUERY | wp_terms | | eq_ref | PRIMARY,slug | PRIMARY | 8 | wordpress.wp_term_taxonomy.term_id | 1 | 5.00 | Using where |
**EXPLAIN results for 8.0 with order by removed (takes ~50ms):**
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
|----|-------------|-----------------------|------------|--------|-----------------------------------|------------------|---------|---------------------------------------------|------|----------|------------------------------|
| 1 | SIMPLE | wp_terms | | ref | PRIMARY,slug | slug | 766 | const | 2 | 100.00 | Using where; Start temporary |
| 1 | SIMPLE | wp_term_taxonomy | | eq_ref | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 138 | wordpress.wp_terms.term_id,const | 1 | 100.00 | Using index |
| 1 | SIMPLE | wp_term_relationships | | ref | PRIMARY,term_taxonomy_id | term_taxonomy_id | 8 | wordpress.wp_term_taxonomy.term_taxonomy_id | 238 | 100.00 | Using index |
| 1 | SIMPLE | wp_posts | | eq_ref | PRIMARY,type_status_date | PRIMARY | 8 | wordpress.wp_term_relationships.object_id | 1 | 50.00 | Using where; End temporary |
I am looking for advice on how to bring MySQL 8 performance up to the level of 5.7 for this query.
Or a fix to the query that accomplishes the same thing. As I mentioned, this query is generated by an ORM (Eloquent), so I'm not sure how much control I have over the raw SQL.
Or maybe there's something I'm missing in the MySQL options?
Or maybe I'm missing an index that could help, besides the default WordPress keys:
| Key name | Type | Unique | Columns |
|----|----|----|----|
| type_status_date | BTREE | FALSE | post_type,post_status,post_date,ID |
| post_parent_id | BTREE | FALSE | post_parent,ID |
| post_parent | BTREE | FALSE | post_parent |
| post_name | BTREE | FALSE | post_name(191) |
| post_author | BTREE | FALSE | post_author |
| PRIMARY | BTREE | TRUE | ID |
**EDIT:** I've rewritten the above query as a series of JOINs instead of subqueries, but it is just as slow:
select * from wp_posts
inner join wp_term_relationships on wp_posts.ID = wp_term_relationships.object_id
inner join wp_term_taxonomy on wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id
inner join wp_terms on wp_term_taxonomy.term_id = wp_terms.term_id
where post_type = 'post'
and post_status = 'publish'
and slug in ('news')
order by post_date DESC
limit 2
**EDIT 2:**
It looks like the cause of the slowness is that MySQL 8 is choosing the PRIMARY index instead of type_status_date. If I force it to use the latter, the query is pretty speedy (~100ms). Still twice as slow as 5.7, but at least it's in the ballpark.
Because it's an ORM writing this query, I don't think this solution is viable, unfortunately.
So adding /*+ index(wp_posts type_status_date) */ makes the query run in ~100ms instead of 30-90 seconds:
select /*+ index(wp_posts type_status_date) */ *
from `wp_posts`
where `post_type` = 'post'
  and exists (
      select *
      from `wp_term_taxonomy`
      inner join `wp_term_relationships`
              on `wp_term_taxonomy`.`term_taxonomy_id` = `wp_term_relationships`.`term_taxonomy_id`
      where `wp_posts`.`ID` = `wp_term_relationships`.`object_id`
        and `taxonomy` = 'tax_channel'
        and exists (
            select *
            from `wp_terms`
            where `wp_term_taxonomy`.`term_id` = `wp_terms`.`term_id`
              and `slug` in ('news')
        )
  )
  and (`post_status` = 'publish')
order by `wp_posts`.`post_date` desc
limit 2 offset 0
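Since the slow 8.0 plan shows Start temporary / End temporary (the DuplicateWeedout semijoin strategy) while 5.7 ran the subqueries as dependent subqueries, one session-level experiment that leaves the ORM's SQL untouched (hedged; worth verifying on a staging copy first) is to disable just that strategy and compare plans:

```sql
-- Turn off only the DuplicateWeedout semijoin strategy for this session
SET optimizer_switch = 'duplicateweedout=off';

-- then re-run the ORM-generated query and re-check EXPLAIN
```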
Will
(1 rep)
Mar 10, 2023, 06:33 AM
• Last activity: Mar 10, 2023, 06:59 PM
-1
votes
1
answers
535
views
How to handle constantly changing schemas and tightly coupled entities (ORM hell)?
Essentially my team has ended up in a situation where Objects A, B, C, D, and E (greatly simplified \*sigh*) are different Java objects, each represented by a single table interconnected with other tables in the DB via foreign keys. Each of the foreign keys MUST be present for the previous one in the chain to exist. We have around 13 different submodules in our application, most of which rely on the same shared underlying data in the same database. On top of this, new requirements come in almost daily, requiring that we change large parts of the schema (usually existing parts), our underlying objects, and areas of the backend as well. A final nail is that we're being asked to add completely new dimensions to existing data that we never could have anticipated. Now business wants us to rework parts of the data model to better match the UI's representation and to better aid reporting functionality. We're using Hibernate as an ORM framework and primarily use Postgres, though we also need to interface with (and have started using) multiple SQL Server databases as well.
RDBMS and the relational model have been around for decades, but I'm not sure how something as rigid as the relational model has been able to survive (and continue to be used) even as businesses grow more and more complex. How can better flexibility be achieved and coupling issues be reduced in an RDBMS? Our team has tried to target 3NF, but I believe we might have over-normalized. I'm not looking for a silver-bullet answer, as I fear there are numerous issues (though some hints on proper ORM-to-RDBMS design would be helpful).
Having ended up in this ORM/DB hell, my plan to escape would likely be as follows:
1. Keep entities as a database affair. Never pass an entity object between services or to the UI; always deal with DTOs unless the entity is extremely simple or represents a lookup table. By passing DTOs around, we can avoid tightly coupling interoperating services in the backend and frontend with our data layer.
2. Check particular pain points in the data model and think of how we can overcome them.
- One example is user access. Essentially, the system works by checking a combination of values that exist on multiple entities against the combinations a user has access to. Having to add new columns to this combination is difficult because it requires a refactor of every query in the entire system (since join conditions now change)--not to mention special exceptions for certain people/management levels.
- Maybe a tag system would work here? Idk just my thoughts.
I'd like to hear from the community in general. I cannot imagine my team is alone in facing these issues.
ElderFuthark
(99 rep)
Dec 2, 2022, 03:54 AM
• Last activity: Dec 2, 2022, 05:18 AM
5
votes
2
answers
62657
views
How to set an auto-increment value in SQLAlchemy?
This is my code:
class csvimportt(Base):
    __tablename__ = 'csvimportt'
    # id = Column(INTEGER, primary_key=True, autoincrement=True)
    aid = Column(INTEGER(unsigned=True, zerofill=True),
                 Sequence('article_aid_seq', start=1001, increment=1),
                 primary_key=True)
I want to set the auto-increment value to start from 1000. How do I do it?
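A hedged note: SQLAlchemy's Sequence only takes effect on backends with real sequences (e.g. PostgreSQL, Oracle). On MySQL, which the INTEGER(unsigned=True, zerofill=True) dialect type suggests is in use here, the starting value is a table option instead, e.g.:

```sql
ALTER TABLE csvimportt AUTO_INCREMENT = 1001;
```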
Farhaan Patel
(51 rep)
May 25, 2018, 07:21 AM
• Last activity: Sep 1, 2022, 04:34 AM
1
votes
1
answers
463
views
SQL Server and NHibernate - blocking occurs between two unrelated queries (distributed transactions)
There's a problem that I'm seeking your help with...
There are two simple queries:
1. DELETE a row from a table in databaseA
2. SELECT a row from a table in databaseB using hints with (updlock, rowlock)

Query 1 blocks query 2.
The Blocked Process Report shows the following:
SELECT stuff from whatever1.table1 WHERE bunch_of_things with (updlock, rowlock)
DELETE from whatever2.table2 WHERE bunch_of_things
Looks like the blocked query is a distributed transaction (DTCXact), right?
A query to sys.dm_tran_database_transactions shows that:
- the blocked query (the SELECT) is associated with just databaseB
- the blocking query (the DELETE) is associated with 3 databases: databaseA, **databaseB** and databaseC. Wouldn't that mean a distributed transaction as well?
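For what it's worth, a hedged sketch (standard DMVs, nothing NHibernate-specific) for confirming which sessions are enlisted in a distributed rather than local transaction:

```sql
-- is_local = 0 means a distributed transaction or an enlisted bound session
SELECT st.session_id,
       st.is_local,
       at.name,
       at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions  AS at
  ON at.transaction_id = st.transaction_id
WHERE st.is_local = 0;
```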
The queries come from NHibernate.
Would you please help me find the root cause of this blocking and how to solve it? Can it be related to misconfiguration in NHibernate? Connection pooling issue?
I'm out of ideas and I don't know much about NHibernate. I'm thinking of dropping the updlock hint and maybe turning on Snapshot Isolation, but if it's NHibernate's fault, then I'd rather have that corrected.
Thanks in advance...
Timbalero
(111 rep)
Dec 17, 2021, 12:59 AM
• Last activity: Aug 10, 2022, 09:04 AM
0
votes
2
answers
778
views
Is there an equivalent SQL check to Python's concept of truthy/falsy/bool?
I'm trying to find the usage of particular columns in a **mysql** database connected to python. Python counts the following to be 'falsy':
- Empty lists: []
- Empty tuples: ()
- Empty dictionaries: {}
- Empty strings ""
- Integer: 0
- Float: 0.0
- None
- False
I realise this is somewhat ORM-dependent (e.g. empty datatypes are stored as NULL rather than a string), but is the below somewhat equivalent? Is there a better way, and is there anything obvious I'm missing? Maybe 0.0? The idea is to calculate a row count and subtract this result. I tried NOT IN for truthy but didn't seem to get a consistent result.
SELECT COUNT(*) FROM tabBatch
WHERE naming_series IN ('', 0, NULL);
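One caveat on the query above, with a hedged alternative: IN ('', 0, NULL) never matches NULL (NULL IN (...) evaluates to NULL, not true), and comparing a string column against the number 0 forces a numeric cast. A NULL-safe, string-only variant might look like:

```sql
SELECT COUNT(*) FROM tabBatch
WHERE naming_series IS NULL
   OR naming_series IN ('', '0', '0.0');
```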
racitup
(103 rep)
Mar 11, 2022, 02:19 AM
• Last activity: Mar 14, 2022, 01:20 AM
1
votes
1
answers
652
views
CTE clause not able to be used in final ORDER BY statement
Trying to order by distance using an input geometry t(x), while comparing to geometries from a JOINed table.
WITH "nearest_groups" as (
SELECT groups.id, ST_Distance(t.x, locations.center) AS nearest
FROM (SELECT ST_GeographyFromText('SRID=4326;POINT(-121.0611 39.2191)')
) AS t(x), groups
INNER JOIN locations ON groups.location_id = locations.id
WHERE ST_DWithin(t.x, locations.center, 300000)
)
SELECT *
FROM "groups"
INNER JOIN "nearest_groups" ON "groups"."id" = "nearest_groups"."id"
ORDER BY "nearest_groups"."nearest" asc
>Error: column "nearest_groups.nearest" must appear in the GROUP BY clause or be used in an aggregate function*
I don't understand what the error message needs me to do to make this query work. Does it make sense to throw a GROUP BY in there? I'm not familiar with aggregate functions either.
(!!!) The query seems to work fine in psql but not in our ORM environment (Bookshelf.js/Knex). I find this alarming; ORMs give me the ever-present fear that I will have to arm-wrestle them into doing what I want.
UPDATE: We are using GraphQL, and a common pattern is to fetch items that can be paged by tacking on a hasMore boolean and a total count. So somewhere else, 'hasMore' and 'total' are being compiled, and it is THERE that this error is being thrown, since those use an aggregate function.
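If the paging counts are the culprit, one option (a hedged sketch, not the SQL Bookshelf/Knex generates) is to compute the total with a window function, which needs no GROUP BY, so the ORDER BY on the CTE column stays legal:

```sql
WITH "nearest_groups" AS (
    SELECT groups.id, ST_Distance(t.x, locations.center) AS nearest
    FROM (SELECT ST_GeographyFromText('SRID=4326;POINT(-121.0611 39.2191)')) AS t(x), groups
    INNER JOIN locations ON groups.location_id = locations.id
    WHERE ST_DWithin(t.x, locations.center, 300000)
)
SELECT g.*,
       ng.nearest,
       COUNT(*) OVER () AS total   -- window aggregate: no GROUP BY required
FROM "groups" g
INNER JOIN "nearest_groups" ng ON g.id = ng.id
ORDER BY ng.nearest ASC;
```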
Underwater_developer
(121 rep)
Feb 10, 2022, 07:00 PM
• Last activity: Feb 10, 2022, 11:04 PM
0
votes
2
answers
163
views
Subquery into join issue
I'm trying as hard as I can to transform my query to get rid of the subqueries. I know it's better not to use subqueries, but my main reason to change this query is that my ORM (Doctrine) isn't able to use any join with a subquery; it doesn't support that (or CTEs).
Is there a way to get rid of the subqueries in this query?
SELECT
s.id,
e.exception,
s.name,
w.url,
w.web_id,
w.active,
w.suspended,
r.email,
p.name AS partner,
p.id AS partnerId,
contacts.names AS contactNames,
contacts.tels AS contactTels,
contacts.emails AS contactEmails
FROM
service s
JOIN
web w ON s.web_id = w.id
JOIN
rus r ON w.rus_id = r.id
JOIN
partner p ON r.partner_id = p.id
LEFT JOIN
exception e ON e.service_id = s.id
LEFT JOIN
(SELECT
p.id,
GROUP_CONCAT(c.name) names,
GROUP_CONCAT(c.tel) tels,
GROUP_CONCAT(c.email) emails
FROM
partner p
LEFT JOIN contact c ON c.partner_id = p.id
WHERE
c.main = 1 OR c.important = 1
GROUP BY p.id) contacts ON contacts.id = p.id
LEFT JOIN
contact c ON c.partner_id = p.id
Tables and sample data:
CREATE TABLE `partner` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `ico` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `created` datetime NOT NULL,
  `active` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `partner_idx_active` (`active`),
  FULLTEXT KEY `partnerEntity` (`name`, `ico`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

INSERT INTO `partner` (`id`, `name`, `ico`, `created`, `active`) VALUES
(1, 'partner1', '123', '2021-05-18 22:27:24', 1);

CREATE TABLE `contact` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `partner_id` int(11) DEFAULT NULL,
  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `email` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `tel` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `created` datetime NOT NULL,
  `active` int(11) NOT NULL,
  `main` int(11) DEFAULT NULL,
  `important` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX_4C62E6389393F8FE` (`partner_id`),
  FULLTEXT KEY `contactEntity` (`name`, `email`, `tel`),
  CONSTRAINT `FK_4C62E6389393F8FE` FOREIGN KEY (`partner_id`) REFERENCES `partner` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

INSERT INTO `contact` (`id`, `partner_id`, `name`, `email`, `tel`, `created`, `active`, `main`, `important`) VALUES
(1, 1, 'contact1', 'test@test.com', '123456789', '2021-05-18 22:28:30', 1, 1, NULL),
(2, 1, 'contact2', 'test2@test.com', '123456788', '2021-05-18 22:28:48', 1, NULL, 1),
(3, 1, 'contact3', 'test3@test.com', '123451234', '2021-05-18 22:29:13', 1, NULL, NULL);

CREATE TABLE `rus` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `partner_id` int(11) DEFAULT NULL,
  `email` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `created` datetime NOT NULL,
  `active` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX_3370C8199393F8FE` (`partner_id`),
  KEY `rus_idx_active` (`active`),
  FULLTEXT KEY `rusEntity` (`email`),
  CONSTRAINT `FK_3370C8199393F8FE` FOREIGN KEY (`partner_id`) REFERENCES `partner` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

INSERT INTO `rus` (`id`, `partner_id`, `email`, `created`, `active`) VALUES
(1, 1, 'test@test.com', '2021-05-18 22:27:36', 1);

CREATE TABLE `service` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `web_id` int(11) DEFAULT NULL,
  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `created` datetime NOT NULL,
  `active` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX_E19D9AD2FE18474D` (`web_id`),
  KEY `service_idx_active` (`active`),
  FULLTEXT KEY `serviceEntity` (`name`),
  CONSTRAINT `FK_E19D9AD2FE18474D` FOREIGN KEY (`web_id`) REFERENCES `web` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

INSERT INTO `service` (`id`, `web_id`, `name`, `created`, `active`) VALUES
(1, 1, 'service1', '2021-05-18 22:28:08', 1);

CREATE TABLE `exception` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `partner_id` int(11) DEFAULT NULL,
  `exception` longtext COLLATE utf8_unicode_ci NOT NULL,
  `created` datetime DEFAULT NULL,
  `service_id` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `FK_7FC98E6D9393F8FE` (`partner_id`),
  KEY `FK_7FC98E6DED5CA9E6` (`service_id`),
  CONSTRAINT `FK_7FC98E6D9393F8FE` FOREIGN KEY (`partner_id`) REFERENCES `partner` (`id`),
  CONSTRAINT `FK_7FC98E6DED5CA9E6` FOREIGN KEY (`service_id`) REFERENCES `service` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

INSERT INTO `exception` (`id`, `partner_id`, `exception`, `created`, `service_id`) VALUES
(1, 1, 'test..', '2021-05-18 22:31:14', 1);

CREATE TABLE `web` (
  `suspended` int(11) NOT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `rus_id` int(11) DEFAULT NULL,
  `url` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `web_id` int(5) unsigned zerofill DEFAULT NULL,
  `created` datetime NOT NULL,
  `active` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX_15C9385126907186` (`rus_id`),
  KEY `web_idx_active` (`active`),
  FULLTEXT KEY `webEntity` (`url`),
  CONSTRAINT `FK_15C9385126907186` FOREIGN KEY (`rus_id`) REFERENCES `rus` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

INSERT INTO `web` (`suspended`, `id`, `rus_id`, `url`, `web_id`, `created`, `active`) VALUES
(0, 1, 1, 'www.test.com', 01337, '2021-05-18 22:27:54', 1);
dbfiddle
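A hedged sketch of one way to eliminate the derived table: join contact directly and move the main/important filter into conditional GROUP_CONCATs. Note this is not exactly equivalent to the original, which also left-joined contact a second time and so multiplied rows per contact:

```sql
SELECT
    s.id,
    e.exception,
    s.name,
    w.url, w.web_id, w.active, w.suspended,
    r.email,
    p.name AS partner,
    p.id   AS partnerId,
    GROUP_CONCAT(CASE WHEN c.main = 1 OR c.important = 1 THEN c.name  END) AS contactNames,
    GROUP_CONCAT(CASE WHEN c.main = 1 OR c.important = 1 THEN c.tel   END) AS contactTels,
    GROUP_CONCAT(CASE WHEN c.main = 1 OR c.important = 1 THEN c.email END) AS contactEmails
FROM service s
JOIN web w            ON s.web_id = w.id
JOIN rus r            ON w.rus_id = r.id
JOIN partner p        ON r.partner_id = p.id
LEFT JOIN exception e ON e.service_id = s.id
LEFT JOIN contact c   ON c.partner_id = p.id
GROUP BY s.id, e.exception, s.name, w.url, w.web_id, w.active, w.suspended,
         r.email, p.name, p.id;
```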
Jakub Stibůrek
(3 rep)
May 18, 2021, 07:51 PM
• Last activity: May 21, 2021, 10:12 AM
6
votes
3
answers
1593
views
Is the ADO.NET Entity Framework known for generating poorly performing queries?
This question is inspired by the comment posted to the latest ServerFault blog post:
> Are you guys still using LINQ to SQL?
I know I can use SQL Server Profiler and/or the ToTraceString method to see the generated queries and analyze them myself. However, I am looking for opinions from **people with hands-on experience administering databases accessed by applications that use the Entity Framework**.
Are Entity Framework queries a common cause of performance problems?
Can LINQ queries be optimized in such cases or is raw Transact-SQL the only solution?
Marek Grzenkowicz
(1495 rep)
Feb 15, 2011, 09:02 PM
• Last activity: Feb 4, 2021, 08:36 AM
0
votes
1
answers
412
views
MySQL IDs not saving in order (Laravel in production)
I'm really confused about this. So, I created a Laravel app and hosted it on Heroku. I'm using the ClearDB extension to be able to use MySQL. Problem is: when I save a new User in my DB, it is not being saved in ID order. I got id 1, then id 11 for the second reg...
[screenshot omitted]
GabrielLearner
(3 rep)
Jun 22, 2020, 11:58 PM
• Last activity: Jun 23, 2020, 12:39 AM
7
votes
4
answers
567
views
How to troubleshoot ORM queries interfering with other applications
Well, it's a hypothetical scenario, but what I'm trying to understand is the path from a post-mortem log (say, a SQL Server Profiler trace) to identifying the code in an ORM situation. To make it not too vague, consider a scenario like this:
- SQL Server 2008
- Entity Framework as an ORM
So, in that scenario, what is the common path for a DBA (who is also a VB.Net developer) to go from the log to diagnosing which code (in this case, LINQ queries) is creating the trouble?
In this case, the app itself is OK, but it is affecting the response time of other apps using the same DB/server.
Would that be absurdly different from a Java+Hibernate setup?
EDIT: I want to understand the path from the trace to the culprit meta-query. If the app had SQL in it, a "Find in Files" session (maybe with some regex, in extreme cases) could reduce the inspection to a few dozen suspects instead of tens or even hundreds of source files.
Using an ORM (in this case: EF), how do you get to that stage?
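On the server side, a hedged starting point (these DMVs exist in SQL Server 2008): rank cached statements by average CPU, then match the statement text back to the LINQ that produced it by its column and parameter names:

```sql
SELECT TOP (20)
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu_us,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_us DESC;
```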
Fabricio Araujo
(957 rep)
Oct 4, 2012, 09:21 PM
• Last activity: May 13, 2020, 06:50 PM
0
votes
1
answers
1161
views
In general, should we avoid using ORMs with Redshift?
My team is considering utilizing Hibernate/Envers for a granular point-in-time rollback solution for Redshift data. I have heard that using an ORM with Redshift is not a good idea. Are there particular drawbacks to using an ORM such as Hibernate with Redshift?
mikelus
(343 rep)
Apr 1, 2020, 09:20 PM
• Last activity: Apr 1, 2020, 10:09 PM
0
votes
0
answers
1088
views
Help in designing a database for a medical clinic keeping records of patient visits
I am making software for patient record keeping in a clinic. I have written a peewee model to set up tables that store a record of each patient visit, patient contact details, and a relatives table that stores the name of a parent or husband (in case a female patient is married).
import peewee
from app.datastore.core import db as _db


class Patient(peewee.Model):
    """ Patients table """
    name = peewee.CharField(max_length=90)
    townCity = peewee.CharField(max_length=200)
    contactnumber = peewee.CharField(max_length=20)
    age = peewee.IntegerField(default=0)
    gender = peewee.CharField(max_length=9)
    email = peewee.CharField(max_length=1024, null=True)
    postalAddress = peewee.TextField()

    def __repr__(self):
        return "<Patient {}>".format(self.id)

    class Meta:
        database = _db


class Relative(peewee.Model):
    """ Relatives (parent or husband) """
    patient = peewee.ForeignKeyField(Patient, related_name="patient")
    relation = peewee.CharField(max_length=10)
    relativeName = peewee.CharField(max_length=90)

    class Meta:
        database = _db


# the below table continues to increment on each visit of a new or old patient (front desk)
class PatientAttendOnVisit(peewee.Model):
    """ Attendance / visit record """
    patient = peewee.ForeignKeyField(Patient, related_name="patient")
    dateTimeofvisit = peewee.DateTimeField()
    attendtype = peewee.CharField(max_length=12)  # walk-in or appointment
    department = peewee.CharField(max_length=9)   # medicine or dental

    class Meta:
        database = _db


# the below table continues to increment on each patient visit (doctor)
class PatientCheckup(peewee.Model):
    """
    Patient checkup table: stores what the patient reports on each visit
    to the doctor, like symptoms, diagnosis and prescription
    """
    patient = peewee.ForeignKeyField(Patient, related_name="patient")
    chiefcomplaint = peewee.TextField()
    symptoms = peewee.TextField()
    diagnosis = peewee.TextField()
    treatment = peewee.TextField()
    presciption = peewee.TextField()
    dateTimeofvisit = peewee.ForeignKeyField(PatientAttendOnVisit)

    class Meta:
        database = _db
I am very new to database-driven applications, with no knowledge of DB design.
Please help me design the best database and schema.

Ciasto piekarz
(139 rep)
Dec 24, 2018, 03:29 PM
• Last activity: Nov 17, 2019, 02:24 PM
Showing page 1 of 20 total questions