Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
161
views
Alternative to multi-master replication between local and cloud db when writes are infrequent and only 1 db at a time
**Background:**
I have a closed-source app on my laptop that saves its data in a local SQLite database. I'm working on creating a mobile app for myself that would replicate the functionality of the desktop app, but run on mobile, consequently necessitating a Cloud database, for example any of the GCP SQL offerings.
**Constraints**:
I will be the sole user of the application, so the DB writes will be very infrequent. Moreover, it can be guaranteed that no writes will happen simultaneously to the local and cloud DBs. The access pattern is more like:
1. Do some work on the laptop, work saved to local db
2. Sync data to cloud after app closes
3. Open app on phone sometime later, read and write to cloud db
4. Open laptop sometime later, get updates from cloud into local
5. Repeat
**Issue:**
Data needs to be eventually consistent between the mobile and the desktop app, i.e. the local SQLite and the cloud DB. I've read about multi-master replication, but this seems like overkill since only one database is accessed at a time, and I feel that some simpler solution might fit the bill here. How could I solve this challenge?
**Unsubstantiated idea?**: Perhaps it would be possible to emit a Pub/Sub event on writes to either db, and have a serverless function listen for local write events, replicating them to the cloud db, and a daemon on the laptop listen for cloud write events, replicating them to the local db. Would something like this work?
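One simpler pattern that matches this access flow is plain high-water-mark change tracking rather than any form of replication. A minimal SQL sketch, assuming a hypothetical `work_items` table with an `updated_at` column and a small `sync_state` bookkeeping table (neither appears in the question):
```sql
-- Sketch: one-way "pull changes since last sync", run by a script at step 2 / step 4.
-- work_items and updated_at are illustrative names, not from the original app's schema.
CREATE TABLE IF NOT EXISTS sync_state (
    direction   TEXT PRIMARY KEY,   -- 'local_to_cloud' or 'cloud_to_local'
    last_synced TIMESTAMP NOT NULL
);

-- Step 2 (laptop -> cloud): select everything changed since the last successful push.
SELECT *
FROM work_items
WHERE updated_at > (SELECT last_synced FROM sync_state WHERE direction = 'local_to_cloud');

-- After copying those rows into the cloud DB, advance the high-water mark.
UPDATE sync_state
SET last_synced = CURRENT_TIMESTAMP
WHERE direction = 'local_to_cloud';
```
Since the question guarantees the two sides are never written at the same time, conflict resolution largely disappears and the reverse direction is the same queries with the roles swapped.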
mikemykhaylov
(1 rep)
Mar 13, 2024, 11:20 PM
• Last activity: Jul 13, 2025, 03:03 PM
0
votes
1
answers
163
views
How do I set up one server to replicate X schemas from X different servers, without resorting to mysqldump?
In other words; I'm trying to figure out how to restore multiple mariabackup dumps to one server (different databases). I assume I'm missing something obvious in the documentation, but I've been looking for months with no luck.
I have a number of different databases (different schema names) on a number of different servers. I would like to have all these databases replicated to one server, for various reasons. All the databases have replication set up and working already (each has one master and several replicas), that's no issue. It's "just" the part where I want to add an extra server, which should replicate all schemas, that's causing me a headache.
I know how to create such a replica by using mysqldump and importing each db individually. But in this case, all databases are far too large for mysqldump, so I (believe I) need to use mariabackup. But I simply can't figure out how to import multiple dumps into one server instance.
Again; I'm sure I'm missing something obvious. Any hint will be greatly appreciated :)
I know it's not that simple, but all I want for Christmas is the ability to set up a blank server and tell it to "replicate this schema from that server", after which the server should "just" stream the entire database from the other server - redis style :P
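For reference, the "replicate this schema from that server" wish maps fairly directly onto MariaDB's multi-source replication, where each upstream gets a named connection on the aggregating server. A rough sketch with placeholder hostnames, credentials and schema names (the initial data for each schema still has to be seeded once, e.g. from a mariabackup restore or a per-schema dump):
```sql
-- Sketch: named replication connections on the single aggregating server (MariaDB).
CHANGE MASTER 'source_a' TO
    MASTER_HOST = 'server-a.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '***',
    MASTER_USE_GTID = slave_pos;

CHANGE MASTER 'source_b' TO
    MASTER_HOST = 'server-b.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '***',
    MASTER_USE_GTID = slave_pos;

-- Optional per-connection schema filters, set in the server config, e.g.:
--   source_a.replicate_do_db = schema_a
--   source_b.replicate_do_db = schema_b

START SLAVE 'source_a';
START SLAVE 'source_b';
```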
Vonsild
(101 rep)
Dec 9, 2021, 09:33 AM
• Last activity: Jul 12, 2025, 01:07 PM
0
votes
1
answers
235
views
SQL SERVER: Header Blocker Across Multiple Databases with wait type CXCONSUMER
We have an instance of SQL Server which has multiple databases. A process in one database seems to be blocking a process in another database. When I look in Activity Monitor I can see a head blocker (a 1 in the Head Blocker column). This seems to be blocking other processes in different databases. I can see their IDs in the Blocked By column when I select one from the drop-down. Am I correct that this is cross-database blocking? I didn't think this was possible. They are all running exactly the same stored procedure, but each database has its own copy. They are doing updates and inserts, but only within their own databases.
e.g.
UPDATE SCA
SET SCA.date_last_read_only = TDR.date_seen
FROM [dbo].[SINGLE_CERT_ACC] SCA
INNER JOIN #TMP_DELTA_READONLY TDR
ON SCA.id = TDR.id
SET @RecsUpdated = @RecsUpdated + @@ROWCOUNT
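Not part of the original post, but one way to confirm whether the blocking really crosses database boundaries is to query the DMVs directly instead of relying on Activity Monitor; a quick sketch:
```sql
-- Sketch: blocked requests with the database each session is currently running in.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       DB_NAME(r.database_id) AS database_name,
       t.text                 AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```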
SQLMIKE
(437 rep)
Feb 14, 2020, 05:04 PM
• Last activity: Jun 7, 2025, 01:00 PM
-1
votes
2
answers
83
views
Merging Data From Multiple Tables With Nesting/Loops
Long story short, I need to replace a website component with a newer one. The problem is that the way the old one stored information in the database is very convoluted.
I have a new empty table (for the new component) with 3 columns:
| field_id | item_id | value |
Each value has a different item_id but field_id is a static number (2). I can populate the table from the old component's table (where x is the data I am unable to get from the old table):
INSERT INTO new_table (field_id, item_id, value)
SELECT 2, x , content FROM old_table WHERE condition;
The problem is item_id (x above): the old component has this value stored in a different table, but the only thing linking the two values is an asset_id from ANOTHER table. So for each row I somehow need to do the following:
1. Get a value called asset_id from old_table_form,
2. Then in a table old_table_content I need the value item_id from the same row as asset_id,
3. Save this value in new_table as item_id for each row.
I thought maybe nesting SELECT commands could work but I am having trouble even visualizing how it should be, or maybe a loop with pseudo-code something like:
LOOP
DECLARE var1
DECLARE var2
SELECT FROM old_table_form -> 'asset_id'
var1 = asset_id
SELECT FROM old_table_content -> 'item_id' from 'asset_id' row
var2 = 'item_id'
save var2 in new table as 'item_id'
repeat for all rows in table
END LOOP
Is something like this even possible in MySQL/phpmyadmin? Any ideas/advice would be greatly appreciated.
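For what it's worth, this kind of lookup chain usually doesn't need a loop at all; it can be written as one set-based INSERT ... SELECT with JOINs. A sketch under assumptions: the column that links old_table to old_table_form isn't named in the question, so form_id below is a placeholder, as is the exact filter condition:
```sql
-- Sketch: populate new_table in a single statement instead of row-by-row.
-- The join columns (form_id, asset_id) are guesses at how the old tables relate.
INSERT INTO new_table (field_id, item_id, value)
SELECT 2,
       otc.item_id,
       ot.content
FROM old_table AS ot
JOIN old_table_form    AS otf ON otf.form_id  = ot.form_id    -- link content row to its form row
JOIN old_table_content AS otc ON otc.asset_id = otf.asset_id  -- asset_id links form to item_id
WHERE ot.id > 0;   -- stand-in for the original "WHERE condition"
```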
rudiments
(7 rep)
Oct 18, 2024, 11:50 PM
• Last activity: Oct 19, 2024, 06:33 PM
1
votes
2
answers
183
views
Scaling from Multiple Database to Single Database Architecture in SQL Server
My application is centered around self-contained "workspaces". For many really good reasons (everything from management to security), we have always had a one-database-per-workspace architecture. Each database has identical schema, stored procedures, triggers, etc. There is a "database of databases" that coordinates all of this. Works great.
The problem: scalability. It was recently proposed that a customer might want to have 100,000 workspaces. Obviously this is a non-starter for one SQL instance. Plus, each workspace might be rather small, but there'd also be a very wide size distribution - the biggest workspace could be 100x the size of the _median_. The top 1% of workspaces could easily constitute 90+% of the rows across all workspaces.
I'm looking for options for rearchitecting things to support this scenario, and here are some things I've considered and the issues I see with each.
- Keep the multi-database architecture but spread across multiple SQL instances. The problem is cost (both administrative and infrastructure). If we stick to a limit of 1,000 DBs on each instance, that's still 100 instances, spread across who knows how many actual VMs. But since so many of the workspaces will be small (much smaller than our current average), the revenue won't nearly scale accordingly. So I think this is probably out of the question and I'm focusing now on single-database architectures.
- Every workspace shares the same tables, indexed by workspace ID (see the sketch after this list). Every table would need a new workspace ID column, and every query would need the workspace condition in its WHERE clause (or, more likely, every real table would be wrapped in an inline table-valued function that takes the WorkspaceID). The primary key of every table would also have to be redefined to include the workspace ID, since not every PK is globally unique now. Programming-wise this is all fine, but even with proper indexing and perfect query design (and no, not all our queries are perfect - the dreaded row scan still happens on occasion), is there any conceivable way this could perform as well - for everyone - as separate databases? More specifically, can we guarantee that small projects won't suffer from the presence of big projects, which could take up 100x more rows than the small ones? And what specific steps would need to be taken - the type of index to use, how to write queries - to guarantee that the optimizer always narrows things down by workspace ID before it does literally anything else?
- Partitioning - from what I've read, this doesn't help with query performance, and it appears MS recommends limiting tables or indexes to 1000 partitions so this also won't help.
- Create the same set of tables but with a new schema for each workspace. I thought of this because there are no limits to the number of tables a database can have other than the overall 2G object limit. But I haven't explored this idea much. I'm wondering if there would be performance concerns with 100,000 schemas and millions of tables, views, stored procs, etc.
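A rough T-SQL sketch of the shared-tables option mentioned above, assuming a hypothetical Orders table (names are illustrative, not from the actual schema); the inline table-valued function is one way to force every access path through the WorkspaceID predicate:
```sql
-- Sketch: shared table keyed by WorkspaceID, wrapped in an inline TVF.
CREATE TABLE dbo.Orders
(
    WorkspaceID int       NOT NULL,
    OrderID     int       NOT NULL,
    CreatedAt   datetime2 NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (WorkspaceID, OrderID)  -- leading column keeps each workspace's rows together
);
GO
CREATE FUNCTION dbo.OrdersForWorkspace (@WorkspaceID int)
RETURNS TABLE
AS
RETURN
(
    SELECT WorkspaceID, OrderID, CreatedAt
    FROM dbo.Orders
    WHERE WorkspaceID = @WorkspaceID
);
GO
-- Callers always go through the function, so the optimizer can seek on WorkspaceID first:
SELECT OrderID, CreatedAt FROM dbo.OrdersForWorkspace(42) WHERE CreatedAt >= '2023-01-01';
```
With WorkspaceID as the leading column of the clustered primary key, each workspace's rows are stored together, which is the main lever for keeping small workspaces unaffected by large ones.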
With all that, here is the specific question -
What specific features of SQL Server, and/or general strategies, including but not limited to things I've considered, would be most useful for maintaining a large number of self-contained data sets with identical schemas in a single giant database? To reiterate, maintaining performance as close as possible to a multi-database architecture is of top priority.
And needless to say, if any part of my assessment above seems incorrect or misguided I'd be glad to be corrected. Many thanks.
Peter Moore
(113 rep)
Aug 17, 2023, 05:30 PM
• Last activity: Aug 20, 2023, 06:23 PM
0
votes
0
answers
22
views
Separating a single database into one user/transaction database and one time series database
My project involves storing sensor data (time series data) in a db. There are about 5,000 sensors, each sending data at 15-minute to 1-hour intervals. The existing service uses a single db, storing all frontend user, transaction and sensor data. The table holding time series data now has over 10 million rows. Other tables, such as the user table, have a few thousand rows.
If I separate the sensor data into a dedicated db (say Postgres with the TimescaleDB extension), will there be significant benefits in terms of performance that outweigh the complexity introduced?
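If the split happens, the TimescaleDB side is mostly one conversion call; a minimal sketch, assuming a hypothetical sensor_readings table rather than the real schema:
```sql
-- Sketch: a dedicated time-series store in Postgres + TimescaleDB.
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE sensor_readings (
    sensor_id  integer          NOT NULL,
    ts         timestamptz      NOT NULL,
    value      double precision NOT NULL
);

-- Turn the plain table into a hypertable partitioned by time.
SELECT create_hypertable('sensor_readings', 'ts');

CREATE INDEX ON sensor_readings (sensor_id, ts DESC);
```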
Oliver Lu
(1 rep)
Feb 11, 2023, 03:25 PM
0
votes
0
answers
1486
views
Handling multiple users/ databases in postgres
What's the best way to organize separate companies/users in Postgres?
I've developed a multi-user platform where users have access only to their own company's data. I'm no DB engineer, and initially the project was only for one user, but now I've had interest from others.
Although it's not an issue currently, I'm just looking ahead, and I don't want to host each company's version on a separate server.
Should I be creating a separate database for each company/user set? Or should I keep all the data in one database, add a company_id column to each table (i.e. customers), and add that condition to all queries from that user?
I feel like multiple databases would keep the users' data separate and probably help with execution speed, as there is less bloat in the tables - but it would be harder to migrate any changes across all of them when an update is required.
On the other hand, adding the new field to all tables and updating all the queries is a bit of a pain, but it keeps things all in one place.
Any help or guidance appreciated.
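For illustration, a sketch of the single-database option where a company_id column plus Postgres row-level security replaces the "add this clause to all queries" step; the table, policy and setting names are assumptions, not from the project:
```sql
-- Sketch: shared tables with a company_id column, filtered by row-level security
-- so each query doesn't need an explicit WHERE company_id = ... added by hand.
CREATE TABLE customers (
    id         bigserial PRIMARY KEY,
    company_id integer NOT NULL,
    name       text    NOT NULL
);

ALTER TABLE customers ENABLE ROW LEVEL SECURITY;

-- Note: RLS does not apply to the table owner unless FORCE ROW LEVEL SECURITY is set,
-- so the application should connect as a separate, non-owner role.
CREATE POLICY company_isolation ON customers
    USING (company_id = current_setting('app.current_company_id')::integer);

-- The application sets the tenant once per connection or transaction:
SET app.current_company_id = '42';
SELECT * FROM customers;   -- only company 42's rows are visible
```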
Lewis Morris
(101 rep)
Jun 13, 2022, 07:36 AM
• Last activity: Jun 13, 2022, 09:33 AM
0
votes
1
answers
2867
views
Linking foreign keys across multiple databases: direct, or using an intermediary table?
I want to make a part of my application reusable, and that warrants moving the corresponding tables into a separate database. So for the sake of an example, please consider the two imaginary databases in the list that follows. (More databases sharing the same logic may be added as the project grows.)
- `users`, containing tables related to user sign ups, login and e-mail history, password reset tokens etc., as well as the accounts themselves; and
- `blogs`, having tables for posts, media files, comments, etc.
Each table in the `blogs` database must obviously have an `account_id` column referring as a foreign key to `users.accounts.id`. (I do realise that to make it work both databases [must use InnoDB and reside on the same server](https://stackoverflow.com/questions/18274299/how-can-you-create-a-foreign-key-to-a-table-in-another-database-with-workbench).)
My question is what would be a better practice:
- direct reference to another database:
  - simply refer `blogs.posts.account_id` to `users.accounts.id` (repeat with all other `blogs.*` tables),
  - make each reference CASCADE ON DELETE; or
- using an intermediary table:
  - create an intermediary table `blogs.accounts` having only one column called `id`; then
  - on one hand, refer every table inside the `blogs` database to that intermediary table (so `blogs.posts.account_id` to `blogs.accounts.id`, CASCADE ON DELETE); and
  - on the other hand, finish by referring this `blogs.accounts.id` to the 'upstream' `users.accounts.id`, making sure to CASCADE ON DELETE as well.
The latter seems like an unnecessary complication. But the only advantage I can think of is this can make the setup future proof in case we end up having to still migrate one (or some) of the databases to another server:
- If we link the tables directly, after the migration the `blogs` database will have lots of disparate `account_id` columns that won't CASCADE ON DELETE.
- But if these intermediary tables get disconnected from the upstream `users.accounts.id`, their neighbouring tables in each respective database are still linked to them. This way we can continue benefitting from at least some integrity and CASCADEs. In other words, if a user gets deleted, all we have to do is have a script go through each of these `*.accounts` connector tables and delete the id counterpart once, and CASCADE will take care of the rest of the tables inside of that database automatically.
Am I on the right track with this logic, or am I missing some other ways to handle this more effectively, and therefore reinventing the wheel?
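For concreteness, the "direct reference" option is just an ordinary foreign key that happens to cross schemas; a minimal sketch with simplified column lists, assuming both databases already exist on the same server:
```sql
-- Sketch: direct cross-database foreign key (same MySQL/MariaDB server, both tables InnoDB).
CREATE TABLE users.accounts (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
) ENGINE = InnoDB;

CREATE TABLE blogs.posts (
    id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    account_id BIGINT UNSIGNED NOT NULL,
    title      VARCHAR(255)    NOT NULL,
    CONSTRAINT fk_posts_account
        FOREIGN KEY (account_id)
        REFERENCES users.accounts (id)
        ON DELETE CASCADE
) ENGINE = InnoDB;
```
The intermediary-table variant would point the REFERENCES target at `blogs.accounts (id)` instead, with one extra FK from `blogs.accounts.id` up to `users.accounts.id`.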
ᴍᴇʜᴏᴠ
(123 rep)
Apr 28, 2021, 09:46 PM
• Last activity: Apr 29, 2021, 04:23 AM
2
votes
2
answers
550
views
Does it matter which database you connect to when querying across multiple databases?
I'm building a Sharepoint application with Nintex workflow that runs a single SQL query over multiple databases (on the same MS SQL server). Does it matter which database I specify in the connection string in terms of speed?
So if my query looks like this:
SELECT col1, col2 FROM db1.table UNION ALL
SELECT col1, col2 FROM db2.table UNION ALL
SELECT col1, col2 FROM db3.table
would it make any difference if my connection string looks like this:
Server=***;Database=db1; Integrated Security=SSPI; Connection Timeout=900
or this
Server=***;Database=db2; Integrated Security=SSPI; Connection Timeout=900
or this?
Server=***;Database=db3; Integrated Security=SSPI; Connection Timeout=900
The table from db1 has more records than db2, which has more records than db3.
EDIT: My query is actually more complex than what I wrote above, I just simplified it because I didn't know that would matter. The real queries have a WHERE clause and a JOIN on a fourth database (db4).
The compatibility level of db2 and db3 is SQL Server 2008 (100); for db1 and db4 it's SQL Server 2017 (140).
Teebs
(121 rep)
Jan 7, 2021, 09:27 AM
• Last activity: Jan 11, 2021, 08:15 AM
1
votes
2
answers
178
views
How to scale with MySQL (when not ready to scale properly)
We are using MySQL.
**My situation**
- I have a large number of tables with millions of rows each. Those tables are updated every second and are used both for adding info and retrieving info. Each table can be 5GB or 10GB or even more.
- I have one table in which I keep sums of information (something like a summary table of the information I need), but this is starting to get big as well.
**My limitations**
- at the moment I cannot change databases for various reasons (mainly lack of knowledge, time and budget)
- all the extra power that we add to the server goes to other resources needed so I cannot run very heavy queries
**Temporary ways I have thought for scaling**
Having these things in mind I am trying to think of ways to scale with what I have:
- For the tables with millions of rows, I have thought of keeping them in separate databases (this could make my life easier for backups / exports / changes): keep my main data in 1 database and all the peripherals (the huge tables) in other databases. Let's say a different database for each different need.
- For the table that I really need regularly and that is growing fast, I was thinking of splitting it into XX tables. Could be 1 table per user (which might be too much) or 1 table per XXX users (see the sketch below).
Are these ideas totally crazy and really bad DB design?
If yes..... any suggestions other than changing everything at once?
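As a point of comparison for the second idea, MySQL's native partitioning expresses a per-user split declaratively instead of multiplying tables; a sketch with a hypothetical events table (note that the partitioning column must be part of every unique key, and partitioning mainly helps maintenance and pruning rather than every query):
```sql
-- Sketch: hash-partition a large table by user_id instead of creating many tables.
CREATE TABLE events (
    user_id    INT UNSIGNED    NOT NULL,
    event_id   BIGINT UNSIGNED NOT NULL,
    created_at DATETIME        NOT NULL,
    payload    VARCHAR(255),
    PRIMARY KEY (user_id, event_id)   -- partitioning key must be part of the PK
)
PARTITION BY HASH (user_id)
PARTITIONS 32;
```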
Vicky Dallas
(21 rep)
May 9, 2020, 01:32 AM
• Last activity: May 10, 2020, 07:19 PM
0
votes
1
answers
60
views
I don't know where to begin: MySQL SELECT with inner SELECTs on iterations too slow with bigger tables (4 tables in 2 databases)
**Context is easy:**
A) A [discussion-thread.db1] has many [Posts.db1]
B) each Post has an [author.db2] and 1 to 10 [attachments.db1]
**Additional Information:**
discussion, posting & attachment meta-data are in one database; the details of the author (= user) are in another database, maybe located physically on the other side of the planet.
At the moment, I have an "outer" SELECT for the postings and (up to) 11(!) iterative inner SELECTs per posting:
SELECT Posts WHERE thread_id = $thread_row_id
WHILE FETCH $posts_row {
    SELECT author.db2 WHERE id = $posts_row_userId
    -- if $posts_row_a1, a2 ... a10 has a value, SELECT that attachment from attachments.db1
    -- (means up to 10 iterations of the attachment SELECT):
    SELECT attachments WHERE id = $posts_row_aN
    next iteration
}
... this works for a mock-up, but with more postings, each with up to 10 attachments, this thing breaks down.
Unfortunately, I do not know how I can consolidate this into one query, so that I can fetch over a single result array.
**I would break the problem into two aspects:**
1. the -up to 10- Attachments.db1 for every Posts.db1 entry
2. the author.db2 for each of the Posts.db1 entries
To 1.) Do I need to make a JOIN for every single attachment? (I have a fixed maximum of 10 attachments.id fields in the Posts.db1 table, so 10 little JOINs of the attachments.db1 table WHERE id = Posts.db1.a1 to Posts.db1.a10 could be done.)
But to 2.)
a) the postings - and even more so the authors of a thread - are a manageable number;
b) but the database is a different one - when running in docker swarm mode it could be physically located on another continent:
=> might it be a good idea to make an extra query to that database first, get all the needed user data of the authors into a sub-array, and join them server-side?
> ---------- start big edit: adding the CREATE & SELECT queries------------
*Background: it's a discussion board for project collaboration for single SCRUM teams (part of my remote scrum application), so the load is not too heavy.
(The discussion threads are task-related, so a maximum of 50 postings with 5 attachments each would be a realistic metric; a more realistic average is probably 10 postings with 3 attachments each per thread.)*
**Info: discussionthreads contain postings - postings each contain a userprofile AND up to 10 attachments**
**I) CREATE TABLES**
(discussionthreads, postings & attachments are in the projectdatabase, and userprofiles are in the metadatabase)
**a) the discussion-threads (the base-query):**
$sql = CREATE TABLE IF NOT EXISTS discussionthreads(
ID int(255) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
Pid int(255) NOT NULL DEFAULT '0',
topic varchar(255) NOT NULL DEFAULT 'not set',
dscrption varchar(255) NOT NULL DEFAULT 'not set',
closed int(3) NOT NULL DEFAULT '0',
created timestamp NOT NULL,
usrID int(8) NOT NULL DEFAULT '0',
timestamp varchar(255) NOT NULL DEFAULT 'not set',
value int(5) NOT NULL DEFAULT '0',
item int(8) NOT NULL DEFAULT '0',
sprintid int(4) NOT NULL DEFAULT '0',
cmplxty int(3) NOT NULL DEFAULT '0',
innovation int(3) NOT NULL DEFAULT '0',
riskstmnt varchar(2550) NOT NULL DEFAULT '0',
dependsup int(3) NOT NULL DEFAULT '0',
dependsdwn int(3) NOT NULL DEFAULT '0',
depstmnt varchar(2550) NOT NULL DEFAULT '0',
status int(8) NOT NULL DEFAULT '0'
);
**b) the postings (WHERE postings.tid = discussionthreads.ID):**
$sql = CREATE TABLE IF NOT EXISTS postings(
ID int(255) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
usrsID int(8) NOT NULL,
tid int(255) NOT NULL,
Pid int(255) NOT NULL,
activ int(2) NOT NULL DEFAULT '1',
reAW int(255) DEFAULT NULL,
project varchar(133) NOT NULL DEFAULT 'not set',
wbsnom varchar(133) NOT NULL DEFAULT 'not set',
wbsnr varchar(33) NOT NULL DEFAULT '0',
actnom varchar(133) NOT NULL DEFAULT 'not set',
actnr varchar(33) NOT NULL DEFAULT '0',
username varchar(133) NOT NULL,
email varchar(133) NOT NULL DEFAULT 'not set',
topic varchar(133) NOT NULL DEFAULT 'not set',
text text NOT NULL,
created timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
qr text DEFAULT NULL,
im text NOT NULL,
ap text NOT NULL,
ss text DEFAULT NULL,
a1 int(2) DEFAULT NULL,
a2 int(2) DEFAULT NULL,
a3 int(2) DEFAULT NULL,
a4 int(2) DEFAULT NULL,
a5 int(2) DEFAULT NULL,
a6 int(2) DEFAULT NULL,
a7 int(2) DEFAULT NULL,
a8 int(2) DEFAULT NULL,
a9 int(2) DEFAULT NULL,
a10 int(2) DEFAULT NULL
);
**c) the attachments (note: every posting can have up to 10 attachments assigned/JOINED via postings.a1, postings.a2 ... postings.a10)**
*(WHERE attachments.id = postings.a1 / attachments.id = postings.a2 / .... / attachments.id = postings.a10)*
$sql = CREATE TABLE IF NOT EXISTS attachments(
id int(11) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
filestoredas varchar(255),
filelocation varchar(255),
fname varchar(255),
fdescription varchar(255),
ftype varchar(255),
fsize varchar(64),
fext varchar(64),
wbsid varchar(255),
wbsname varchar(255),
wbsdeliver varchar(255),
threadid varchar(255),
misc2 varchar(255),
miscn varchar(255),
timestored varchar(255) NOT NULL
);
**d) the user-profiles (its the only query from another database)**
$sql ="CREATE TABLE IF NOT EXISTS userprofiles(
id int(16) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
usrsID int(16),
username varchar(133) NOT NULL,
email varchar(255) NOT NULL,
image_type varchar(25) NOT NULL DEFAULT 'nopic',
image longblob,
image_size varchar(25) NOT NULL DEFAULT '''''',
avatar varchar(133) NOT NULL DEFAULT '''''',
image_type2 varchar(25) DEFAULT 'nopic',
image2 longblob,
image_size2 varchar(25) NOT NULL DEFAULT '''''',
avatar2 varchar(133) NOT NULL DEFAULT '''''',
fname varchar(255) NOT NULL DEFAULT 'myFirstName',
lname varchar(255) NOT NULL DEFAULT 'myLastName',
phone varchar(255) NOT NULL DEFAULT '+00 0000 000-00',
daytime varchar(32) NOT NULL DEFAULT 'daytime',
tzone varchar(64) NOT NULL DEFAULT 'timezone',
position varchar(666) NOT NULL DEFAULT 'MyRole MyPosition in this project',
skills text NOT NULL DEFAULT 'MySkills related to MyRole',
interestedin text NOT NULL DEFAULT 'MyInterests private or business',
comment text NOT NULL DEFAULT 'No Stereotypes, but myCulture, myPassion, myPreferences',
linklist text NOT NULL DEFAULT 'Some links. For work, for fun, for sharing some interests...',
thorie_nearness varchar(11) NOT NULL DEFAULT '0',
thorie_risk varchar(11) NOT NULL DEFAULT '0'
);
-------------------------------
**II) perform queries**
The single queries (as mentioned: a) there are two databases; b) at the moment I simply iterate server-side, and within the iterations I run the sub-queries):
**A) from projectdatabase:**
SELECT * FROM discussionthreads WHERE ID=~url-tid-param LIMIT 1
SELECT * FROM postings WHERE tid=discussionthreads.ID ORDER BY created
SELECT * FROM attachments WHERE id = postings.a1 LIMIT 1
SELECT * FROM attachments WHERE id = postings.a2 LIMIT 1
SELECT * FROM attachments WHERE id = postings.a3 LIMIT 1
SELECT * FROM attachments WHERE id = postings.a4 LIMIT 1
SELECT * FROM attachments WHERE id = postings.a5 LIMIT 1
SELECT * FROM attachments WHERE id = postings.a6 LIMIT 1
SELECT * FROM attachments WHERE id = postings.a7 LIMIT 1
SELECT * FROM attachments WHERE id = postings.a8 LIMIT 1
SELECT * FROM attachments WHERE id = postings.a9 LIMIT 1
SELECT * FROM attachments WHERE id = postings.a10 LIMIT 1
**B) from metadatabase**
SELECT * FROM userprofiles WHERE id = postings.usrsID LIMIT 1
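For illustration, the per-posting loop can usually collapse into a single statement that LEFT JOINs the ten attachment slots, and the user profile too when both schemas sit on the same MariaDB server; a trimmed sketch (if the metadatabase really lives elsewhere, fetch the profiles separately with one `WHERE usrsID IN (...)` query instead):
```sql
-- Sketch: fetch a whole thread in one round trip instead of N+1 queries.
-- Assumes projectdatabase and metadatabase are schemas on the same MariaDB server.
SELECT p.ID, p.topic, p.text, p.created,
       u.username, u.email, u.avatar,
       a1.fname AS attachment1, a2.fname AS attachment2, a3.fname AS attachment3
       -- ... repeat for a4..a10 as needed
FROM projectdatabase.postings AS p
LEFT JOIN metadatabase.userprofiles  AS u  ON u.id  = p.usrsID
LEFT JOIN projectdatabase.attachments AS a1 ON a1.id = p.a1
LEFT JOIN projectdatabase.attachments AS a2 ON a2.id = p.a2
LEFT JOIN projectdatabase.attachments AS a3 ON a3.id = p.a3
-- ... one LEFT JOIN per remaining attachment column a4..a10
WHERE p.tid = ?            -- thread id from the URL parameter
ORDER BY p.created;
```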
---------
edit2:
- *I am running it via docker / docker-compose - the Dockerfile for the databases:*
FROM mariadb:10.4
*just FYI (maybe relevant): the containers for the web app are all built from a Dockerfile starting with:*
FROM php:7.2-apache
...
flowfab
(5 rep)
Apr 13, 2020, 10:41 AM
• Last activity: Apr 19, 2020, 11:27 PM