Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1 vote • 1 answer • 490 views
How to use a SELECT statement to get data from an SQLite table with dictionary values?
I have the dictionary values below.
Example:
dic = {'ID1': ['Name1', 'Name2'],
'ID2': ['Name1', 'Name3'],
'ID3': ['Name2']}
I want to print the address of every matched name from the dict values above in the format below.
Example Output:
['ID1', 'adress1' ,'adress2'],
['ID2', 'adress1' ,'adress3'],
['ID3', 'adress2']
Below is example code for my database table, which has "Emp_ID", "Emp_Name" and "Emp_Address" columns. The dict values are employee names; each should be matched against Emp_Name and the addresses collected into lists, as in the example output.
import sqlite3

conn = sqlite3.connect('emp.db')
conn.execute("""CREATE TABLE IF NOT EXISTS branch
            (Emp_ID TEXT UNIQUE,
             Emp_Name TEXT UNIQUE,
             Emp_Address TEXT
            )""")
# OR IGNORE skips rows that would violate the UNIQUE constraints.
conn.executemany(
    "INSERT OR IGNORE INTO branch(Emp_ID, Emp_Name, Emp_Address) VALUES(?, ?, ?)",
    [('Emp1', 'Name1', 'adress1'),
     ('Emp2', 'Name2', 'adress2'),
     ('Emp3', 'Name3', 'adress3'),
     ('Emp4', 'Name4', 'adress4'),
     ('Emp5', 'Name5', 'adress5')])
conn.commit()
conn.close()
For example, the 'ID1' key should not be changed, while 'Name1' and 'Name2' should each be replaced by their addresses, and likewise for every matched name. Which statements are useful for my scenario? Is it possible, and if so, how?
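A minimal sketch of one way to do this (not part of the original question): build a parameterized IN (...) list per dictionary key and collect the matching addresses.

import sqlite3

dic = {'ID1': ['Name1', 'Name2'],
       'ID2': ['Name1', 'Name3'],
       'ID3': ['Name2']}

conn = sqlite3.connect('emp.db')
result = []
for key, names in dic.items():
    placeholders = ','.join('?' * len(names))
    # Parameterized IN (...) query; row order within a key is not guaranteed.
    rows = conn.execute(
        f"SELECT Emp_Address FROM branch WHERE Emp_Name IN ({placeholders})",
        names).fetchall()
    result.append([key] + [address for (address,) in rows])
conn.close()
print(result)  # e.g. [['ID1', 'adress1', 'adress2'], ['ID2', 'adress1', 'adress3'], ['ID3', 'adress2']]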
Mano
(51 rep)
Oct 30, 2020, 06:05 AM
• Last activity: Jul 31, 2025, 04:09 AM
0 votes • 1 answer • 143 views
accelerating a SQL quadruple self-join with a complex alternation in the WHERE clause
The following SQLite query joins table t to itself three times:
SELECT "update_variable",
lhs_acc.name_suffix || ":" || rhs_var.name_suffix,
op.span,
op.path
FROM t op
JOIN t lhs_acc ON (lhs_acc.path GLOB op.path || "?*")
JOIN t rhs_acc ON (rhs_acc.path GLOB op.path || "?*")
JOIN t rhs_var ON (rhs_var.path GLOB op.path || "?*")
WHERE (op.name_prefix = "assignment"
AND lhs_acc.name_prefix = "assignment_lhs_identifier"
AND rhs_acc.name_prefix = "assignment_rhs_atom"
AND rhs_var.name_prefix = "assignment_rhs_atom"
AND rhs_acc.name_suffix != rhs_var.name_suffix
AND lhs_acc.name_suffix = rhs_acc.name_suffix)
OR (op.name_prefix = "augmented_assignment"
AND lhs_acc.name_prefix = "assignment_lhs_identifier"
AND rhs_acc.name_prefix = "assignment_rhs_atom"
AND rhs_var.name_prefix = "assignment_rhs_atom")
OR (op.name_prefix = "method_call"
AND rhs_acc.name_prefix = "method_call_name"
AND lhs_acc.name_prefix = "method_call_object"
AND rhs_var.name_prefix = "call_argument"
AND rhs_acc.name_suffix != rhs_var.name_suffix
AND rhs_acc.name_suffix REGEXP "(append|extend|insert|add|update)$")
GROUP BY op.span,
lhs_acc.name,
rhs_var.name
It is far too slow for my application. I can accelerate it enough by restricting each instance of t to the rows I am interested in, which duplicates the WHERE conditions. So the FROM clause becomes:
FROM
(SELECT *
FROM t
WHERE t.name_prefix IN ("assignment",
"augmented_assignment",
"method_call")) op
JOIN
(SELECT *
FROM t
WHERE t.name_prefix IN ("assignment_lhs_identifier",
"method_call_object") ) lhs_acc ON (lhs_acc.path GLOB op.path || "?*")
JOIN
(SELECT *
FROM t
WHERE t.name_prefix IN ("assignment_rhs_atom",
"method_call_name") ) rhs_acc ON (rhs_acc.path GLOB op.path || "?*")
JOIN
(SELECT *
FROM t
WHERE t.name_prefix IN ("assignment_rhs_atom",
"call_argument") ) rhs_var ON (rhs_var.path GLOB op.path || "?*")
Is there some technique which would lead to a less verbose / ugly / redundant query?
---
Edit: The test duplication (but not the ugliness) can be avoided as follows:
SELECT "update_variable",
lhs_acc.name_suffix || ":" || rhs_var.name_suffix,
op.span,
op.path
FROM
(SELECT *
FROM t
WHERE t.name_prefix IN ("assignment",
"augmented_assignment",
"method_call")) op
JOIN t lhs_acc ON (lhs_acc.name_prefix = (CASE op.name_prefix
WHEN "method_call" THEN "method_call_object"
ELSE "assignment_lhs_identifier"
END)
AND (lhs_acc.path GLOB op.path || "?*"))
JOIN t rhs_acc ON (rhs_acc.name_prefix = (CASE op.name_prefix
WHEN "method_call" THEN "method_call_name"
ELSE "assignment_rhs_atom"
END)
AND (rhs_acc.path GLOB op.path || "?*"))
JOIN t rhs_var ON (rhs_var.name_prefix = (CASE op.name_prefix
WHEN "method_call" THEN "call_argument"
ELSE "assignment_rhs_atom"
END)
AND (rhs_var.path GLOB op.path || "?*"))
WHERE op.name_prefix = "augmented_assignment"
OR (op.name_prefix = "assignment"
AND lhs_acc.name_suffix = rhs_acc.name_suffix
AND rhs_acc.name_suffix != rhs_var.name_suffix)
OR (op.name_prefix = "method_call"
AND rhs_acc.name_suffix REGEXP "(append|extend|insert|add|update)$"
AND rhs_acc.name_suffix != rhs_var.name_suffix)
GROUP BY op.span,
lhs_acc.name,
rhs_var.name
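One further option, sketched here rather than offered as a full rewrite: SQLite supports common table expressions, so a filtered slice of t can be named once in a WITH clause instead of being repeated as inline subqueries.

-- Hedged sketch: name each filtered slice of t once, then join the names.
WITH op AS
  (SELECT * FROM t
   WHERE name_prefix IN ('assignment', 'augmented_assignment', 'method_call')),
acc AS
  (SELECT * FROM t
   WHERE name_prefix IN ('assignment_lhs_identifier', 'method_call_object',
                         'assignment_rhs_atom', 'method_call_name',
                         'call_argument'))
SELECT op.span, op.path
FROM op
JOIN acc lhs_acc ON lhs_acc.path GLOB op.path || '?*';
-- The remaining joins and the WHERE alternation attach unchanged;
-- each now references op/acc instead of repeating the filter.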
Aristide
(121 rep)
Feb 1, 2020, 12:59 PM
• Last activity: Jul 26, 2025, 12:02 PM
0 votes • 1 answer • 636 views
How should I shift values back one place in SQLite?
I have an SQLite DB with 16 columns: id, and f1 to f15.
I mistakenly entered data starting from the second column, leaving the first one empty, so all my values are one place ahead; i.e., the value for the last column of the first row ended up in the first column of the second row.
I now wish to correct this so that all values are in their right place. Is there a query for this problem?
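A hedged sketch of one way out, assuming the table is named tbl and rowid order matches entry order (both assumptions, not given in the question): rebuild the rows shifted one place back, pulling each wrapped value from the next row's first column.

-- Build a corrected copy, then inspect it before replacing the original.
CREATE TABLE tbl_fixed AS
SELECT cur.f1  AS id,
       cur.f2  AS f1,  cur.f3  AS f2,  cur.f4  AS f3,  cur.f5  AS f4,
       cur.f6  AS f5,  cur.f7  AS f6,  cur.f8  AS f7,  cur.f9  AS f8,
       cur.f10 AS f9,  cur.f11 AS f10, cur.f12 AS f11, cur.f13 AS f12,
       cur.f14 AS f13, cur.f15 AS f14,
       nxt.id  AS f15          -- the value that wrapped to the next row
FROM tbl cur
LEFT JOIN tbl nxt ON nxt.rowid = cur.rowid + 1;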
PRADHYUMN SHARMA
(9 rep)
Nov 3, 2020, 03:32 AM
• Last activity: Jul 23, 2025, 05:03 AM
0 votes • 1 answer • 161 views
Alternative to multi-master replication between local and cloud db when writes are infrequent and only 1 db at a time
**Background:**
I have a closed-source app on my laptop that saves its data in a local SQLite database. I'm working on creating a mobile app for myself that would replicate the functionality of the desktop app but run on mobile, consequently necessitating a cloud database, for example any of the GCP SQL offerings.
**Constraints**:
I will be the sole user of the application, so the DB writes will be very infrequent. Moreover, it can be guaranteed that no writes will happen simultaneously to the local and cloud DBs. The access pattern is more like:
1. Do some work on the laptop, work saved to local db
2. Sync data to cloud after app closes
3. Open app on phone sometime later, read and write to cloud db
4. Open laptop sometime later, get updates from cloud into local
5. Repeat
**Issue:**
Data needs to be eventually consistent between the mobile and the desktop app, i.e. between the local SQLite and the cloud DB. I've read about multi-master replication, but this seems like overkill since only one database is accessed at a time, and I feel that some simpler solution might fit the bill here. How could I solve this challenge?
**Unsubstantiated idea?**: Perhaps it would be possible to emit a Pub/Sub event on writes to either db, and have a serverless function listen for local write events, replicating them to the cloud db, and a daemon on the laptop replicating the cloud writes to local. Would something like this work?
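Something in that spirit can be done without Pub/Sub at all. A hedged sketch, with illustrative names: each side appends to a change log, and a single cursor records how far the other side has caught up; because only one database is written to at a time, replaying the log in order cannot conflict.

-- Change log kept on both sides (schema identical locally and in the cloud).
CREATE TABLE IF NOT EXISTS change_log (
    seq        INTEGER PRIMARY KEY AUTOINCREMENT,  -- replay order
    table_name TEXT NOT NULL,
    row_id     INTEGER NOT NULL,
    changed_at TEXT NOT NULL DEFAULT (datetime('now'))
);

-- One cursor per peer: the highest seq already applied from that peer.
CREATE TABLE IF NOT EXISTS sync_state (
    peer     TEXT PRIMARY KEY,
    last_seq INTEGER NOT NULL DEFAULT 0
);
-- Sync = ship rows WHERE seq > last_seq, apply them, advance last_seq.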
mikemykhaylov
(1 rep)
Mar 13, 2024, 11:20 PM
• Last activity: Jul 13, 2025, 03:03 PM
1 vote • 1 answer • 179 views
storing histogram in SQLite database
I am not a database administrator, but only a scientist who would appreciate your help in solving my issue with storing histogram data in an SQLite database.
I have several of them to be stored and to be later analysed with pandas (python).
Each histogram is made of two arrays:
1. one for the bins or buckets, which are regularly spaced, say from min to max with a given step.
2. one for the values.
First question: how would you store the two arrays? They are rather long, up to 65k elements. I don't need to store the bin values; I can in principle recalculate them from the min, max and step. The value array may contain many zeros, so it may be convenient to store it sparsely.
Second question: I would like to retrieve them with a select returning something like:
bin1, value1
bin2, value2
...
binN, valueN
Sorry if my question looks too basic to you, but I have been scratching my head over this problem for too long without finding any way out.
Thanks in advance for your help!
## Update
As a preliminary, not really disk-space-effective solution, I have implemented something like the suggestion of @White Owl. Instead of storing the two arrays as text, I'm storing them as binary BLOBs.
Here is my code:
CREATE TABLE HistogramTable (
    HistogramID INTEGER PRIMARY KEY,
    ImageID INTEGER,
    Bins BLOB,
    Histo BLOB,
    FOREIGN KEY (ImageID) REFERENCES ImageTable(ImageID)
);
To get the two blobs I'm using pickle:
import pickle
import sqlite3
import numpy as np

conn = sqlite3.connect('mydb.db')
histo, bins = np.histogram(data)  # data: the sample array being histogrammed
histo_blob = sqlite3.Binary(pickle.dumps(histo))
bins_blob = sqlite3.Binary(pickle.dumps(bins))
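Reading the blobs back is symmetric. A hedged usage sketch (HistogramID = 1 is illustrative): unpickle both arrays and pair each count with the left edge of its bin, which yields exactly the bin/value rows asked for in the second question.

import pickle
import sqlite3

conn = sqlite3.connect('mydb.db')
bins_blob, histo_blob = conn.execute(
    "SELECT Bins, Histo FROM HistogramTable WHERE HistogramID = ?",
    (1,)).fetchone()
bins = pickle.loads(bins_blob)    # np.histogram returns N+1 bin edges
histo = pickle.loads(histo_blob)  # and N counts
for b, v in zip(bins[:-1], histo):
    print(b, v)                   # bin1, value1 ... binN, valueN
conn.close()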
user41796
(111 rep)
Jun 28, 2023, 08:35 PM
• Last activity: Jul 5, 2025, 07:06 AM
0 votes • 2 answers • 182 views
Database design question: Tags with values (duration) linked to them
I am creating a database that takes in daily inputs such as:
- Date
- Events in that date (e.g. project foo: 5 hours, bar: 2 hours, etc...)
- Rating of the date
There's more but these are probably the only relevant inputs regarding the problem.
The problem is how to construct my tables so that I somehow store the specific duration an event lasts. Currently what I have is:
- A users table storing user login credentials (1)
- A rating table to store input date, rating of the date, etc... (2)
- A tags table storing unique tags (3)
I will use the tags and ratings to look for possible patterns in the future in case that's relevant.
I wouldn't find it difficult at all if I just had to deal with tags alone related to specific dates: I would have a unique table of tags and associate each tag_id with a date. But I have no clue how to store the duration of a tag on a specific day efficiently. I could of course use a single table, accept redundancy for overlapping tags on different days, and store tags, events and durations in a single string field, but that doesn't seem efficient to me.
I am very much a novice when it comes to databases and I appreciate the help.
I am using Python 3.7.2, SQLite and Flask-SQLAlchemy 2.4.0 to construct the tables.
Like so:
(1)
class User(db.Model):
__tablename__ = 'user'
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(80), unique=True, nullable=False)
email = db.Column(db.String(120), unique=True, nullable=False)
password_hash = db.Column(db.String(128))
(2)
class Rating(db.Model):
__tablename__ = 'rating'
id = db.Column(db.Integer, primary_key=True)
date = db.Column(db.Date, unique=False, nullable=False)
rating_day = db.Column(db.Integer, nullable=False)
user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False)
# etc...
(3)
class Tags(db.Model):
__tablename__ = 'tags'
id = db.Column(db.Integer, primary_key=True)
tag = db.Column(db.Integer, unique=True, nullable=False)
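One common shape for this, sketched with illustrative names in the same Flask-SQLAlchemy style as above: an association table between the day's rating row and a tag, with the duration stored on the link itself, so a tag can appear on many days with a different duration each time.

class RatingTag(db.Model):
    __tablename__ = 'rating_tags'
    id = db.Column(db.Integer, primary_key=True)
    rating_id = db.Column(db.Integer, db.ForeignKey('rating.id'), nullable=False)
    tag_id = db.Column(db.Integer, db.ForeignKey('tags.id'), nullable=False)
    duration_minutes = db.Column(db.Integer, nullable=False)  # e.g. 300 for "foo: 5 hours"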
Librarian
(9 rep)
Jul 24, 2019, 04:32 PM
• Last activity: Jun 30, 2025, 08:02 PM
0 votes • 1 answer • 210 views
Is there a way of securing ProxySQL's own SQLite3 database?
As I understand it, **ProxySQL** supports *SSL encryption* for inbound and outbound traffic (frontend/backend) and supports *password hashing*. However, back in 2017 one had direct database access:
https://dba.stackexchange.com/questions/164705/how-to-easily-bring-80-mysql-users-into-proxysql/212991
According to *Icarus*'s answer, it would be rather easy for someone who broke into the hosting server to gain access to all endpoints listed in this database by making a SELECT call directly against it.
Is there any way of encrypting the SQLite3 DB? Or perhaps of encrypting the data stored in the database (similar to how SSL is used for in-transit data)? Basically looking for *data-at-rest* securing strategies.
Perhaps this is already implemented in recent versions? I could not find any documentation supporting it, but it did seem that may be the case:
https://github.com/sysown/proxysql/blob/v2.0.13/lib/ProxySQL_Admin.cpp
Which, if I understood correctly, implements SHA1 hashing:
https://stackoverflow.com/questions/3179021/sha1-hashing-in-sqlite-how
If this is correct, is it restricted to password hashing? Or is it part of the SSL encryption implementation? Or is there actually a way of enabling encrypted "DISK" reads/writes? Is this by any chance implemented by default on v2+ builds?
Ultimately, I plan on encrypting the volume this is installed in, and have a second proxy layer so that I don't expose endpoint information; this would be an additional layer of protection.
Thanks, I appreciate any insights you may have!
Eduardo A del Corral Lira
(11 rep)
Jun 22, 2020, 02:13 PM
• Last activity: Jun 26, 2025, 07:06 AM
1 vote • 1 answer • 226 views
Equivalent to BIT_OR function in SQLite?
I have code that runs against a MySQL database, but the unit tests for the code use SQLite.
The issue is that one method uses a SQL query which aggregates using the BIT_OR function of MySQL. Is there an equivalent, or a way to replicate its functionality, in SQLite?
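SQLite has no built-in BIT_OR aggregate, but if the unit tests go through Python's sqlite3 module (an assumption here), a user-defined aggregate can stand in for it. A hedged sketch:

import sqlite3

# Register a user-defined aggregate that ORs values together,
# mimicking MySQL's BIT_OR.
class BitOr:
    def __init__(self):
        self.result = 0

    def step(self, value):
        if value is not None:
            self.result |= value

    def finalize(self):
        return self.result

conn = sqlite3.connect(':memory:')
conn.create_aggregate('BIT_OR', 1, BitOr)
conn.execute('CREATE TABLE flags (v INTEGER)')
conn.executemany('INSERT INTO flags VALUES (?)', [(1,), (2,), (4,)])
print(conn.execute('SELECT BIT_OR(v) FROM flags').fetchone())  # (7,)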
Nick Gotch
(163 rep)
Oct 7, 2020, 04:33 PM
• Last activity: Jun 13, 2025, 06:11 PM
2 votes • 1 answer • 448 views
SQLite Internals - Records
Hey, I'm trying to wrap my head around SQLite data storage, specifically how it stores records. I've found a book, [The Definitive Guide to SQLite](https://link.springer.com/book/10.1007/978-1-4302-3226-1), where the author explains the internal record format (Figure 9-5, page 351):
Given table:
sqlite> SELECT * FROM episodes ORDER BY id LIMIT 1;
id season name
--- ------ --------------------
0 1 Good News Bad News
Its internal record format is:
| 04 | 01 | 01 | 49 | | 00 | 01 | Good News Bad News |
> "The header is 4 bytes long. The header size reflects this and itself is encoded as a single byte. The first type, corresponding to the id field, is a 1-byte signed integer. The second type, corresponding to the season field, is as well. The name type entry is an odd number, meaning it is a text value. Its size is therefore given by (49-13)/2=18 bytes."
Specifically, I'm curious about the TEXT type. In the example above we have a string of length 18 characters, and the rule for TEXT in SQLite is as follows:
Type Value       Meaning   Length of Data
------------     -------   --------------
N>13 and odd     TEXT      (N-13)/2
What happens, though, when the string is longer? Won't it get out of range of that one byte?
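For context on the question: in the SQLite record format each serial type in the header is a varint, not a single byte, so a longer string simply takes a multi-byte type code. A hedged sketch of the encoding (ignoring the special 9-byte case):

def varint_bytes(n: int) -> bytes:
    """Encode n as a big-endian base-128 varint, as SQLite record headers do."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)  # continuation bit on all but the last byte
        n >>= 7
    return bytes(reversed(out))

# A TEXT value of 200 characters has serial type N = 2*200 + 13 = 413,
# which needs two header bytes rather than one:
print(varint_bytes(2 * 200 + 13).hex())  # '831d'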
Matis
(121 rep)
Mar 5, 2020, 04:13 PM
• Last activity: Jun 6, 2025, 05:08 PM
5 votes • 1 answer • 2949 views
Setting up cloud database for syncing to local SQLite database
**Background**
I have a SQL Server database hosted on AWS RDS and there are web applications and WEB APIs that talk to the database. The database is a multi-tenant database and we are currently using SQL Server 2014 although we can upgrade if required.
A third party developed a local client application on our behalf which has its own SQLite database. This application is developed in Xamarin so it runs on Windows, iOS and Android. The local SQLite database must be kept in sync with the cloud database. Syncing data up to the cloud database is not a problem, but syncing data down is causing us issues. Currently we sync data down to the local database by asking the WEB API, every minute, to return all changes that have occurred since a particular date. The cloud database has DateCreated, DateModified and DateDeleted columns in every table, and these columns are queried to see what data has changed since the last time the client synced. The local application records the last successful sync date for each table.
**Problem**
This approach worked when there were few local clients and few tables to sync but as our client base has grown this approach doesn't seem scalable. We are running into performance issues on our cloud database and a lot of the time the sync-down tasks are cancelled due to timeouts or take ages to run. Our customers are complaining about the time it takes for changes they make on the cloud to sync down to the local application.
**Potential Solution**
Having researched various methods of tracking changes in SQL Server, I believe that the built-in Change Tracking feature is a better approach than the DateCreated, DateModified and DateDeleted columns. What I am not sure about is how best to set this up.
Things to consider:
- Not all columns on the cloud database tables need to sync to the local database - for example, TableA on the cloud database has 20 columns but its corresponding client TableA may only have 5
- Not all data relating to a tenant needs to sync to their local database - for example, if a record is marked as "inactive" for that tenant it should never be synced locally
- A table on the local database may contain data from two or more tables on the cloud database
- Not all tenants have the local application yet but they will eventually (this may take a year or more to roll out)
What I am thinking of doing is as follows:
- Create a separate database in AWS RDS that exactly matches the local database
- Enable change tracking on this database rather than on the main database
- Use triggers to keep the main database in sync with the new database
- Query the change tracking tables on the new database and return the changes to the local application
- Create a new table to track if data has changed or not for each tenant and table - this way we won't need to query the change tracking tables each minute only to find that nothing has changed
The reason for the second database is to reduce the strain on the main database when clients are trying to sync data down and also keeping the schemas in sync reduces the complexity on the queries when a client requests to sync changes. For example, if a record is marked as "inactive" for the tenant in the main database, but that record has been changed, I don't want to have to filter this record out when the client requests to sync the data down. I would prefer to already have those records filtered out so that they would never exist in the second database at all. Hope that makes sense!
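For the change-tracking piece itself, a hedged T-SQL sketch (database, table and key names are illustrative) of enabling the feature on the new sync database and pulling deltas for a client:

-- Enable change tracking on the sync database and on each synced table.
ALTER DATABASE SyncDB
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.TableA
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);

-- A client pulls everything changed since the version it last saw.
DECLARE @last_sync_version bigint = 0;  -- stored per client/table
SELECT ct.SYS_CHANGE_OPERATION, ct.SYS_CHANGE_VERSION, a.*
FROM CHANGETABLE(CHANGES dbo.TableA, @last_sync_version) AS ct
LEFT JOIN dbo.TableA AS a ON a.Id = ct.Id;  -- Id assumed to be the primary key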
I would very much value your feedback on this approach and please feel free to suggest better ways of doing it. If there is something that is not clear please let me know and I'll update the question!
keithm
(151 rep)
Oct 26, 2020, 10:56 AM
• Last activity: May 31, 2025, 02:04 PM
1 vote • 1 answer • 256 views
sqlite 3.34 updating table with data from different table - changed with 3.33 - newbie question
All of the other questions on this seem to predate 3.33, when SQLite added an UPDATE...FROM (update join) feature. Newbie here, so I may even be getting the terminology wrong. I am trying to update the issbn field in table GULLfromLG (currently empty) with data from the issbn field in BIBinfo (the field can be empty in a given record).
I am using SQLiteStudio. From looking through the documentation, sqlitetutorial, w3resource, and general web searches, I believe the following should work, but it doesn't: the processing happens and is declared successful, yet all the fields in GULLfromLG remain empty. What am I missing?
UPDATE GULLfromLG
SET issbn = (
select issbn
FROM BIBinfo
WHERE bibno = bibno)
I have also tried naming the fields with variations of
GULLfromLG.issbn
GULLfromLG.bibno
BIBinfo.issbn
BIBinfo.bibno
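For what it's worth, the unqualified WHERE bibno = bibno compares BIBinfo's column with itself, which is true for every row. A hedged sketch of a correlated form, plus the UPDATE ... FROM syntax added in SQLite 3.33:

-- Correlated subquery: qualify the outer table's column explicitly,
-- and only touch rows that actually have a match in BIBinfo.
UPDATE GULLfromLG
SET issbn = (SELECT b.issbn FROM BIBinfo b
             WHERE b.bibno = GULLfromLG.bibno)
WHERE EXISTS (SELECT 1 FROM BIBinfo b
              WHERE b.bibno = GULLfromLG.bibno);

-- Equivalent UPDATE ... FROM form (SQLite 3.33+):
UPDATE GULLfromLG
SET issbn = b.issbn
FROM BIBinfo b
WHERE b.bibno = GULLfromLG.bibno;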
Any help would be greatly appreciated!
user223026
(11 rep)
Jan 31, 2021, 03:01 PM
• Last activity: May 19, 2025, 01:12 AM
3 votes • 1 answer • 1889 views
force SQLite view column type affinity
Using SQLite 3, I have some view columns which are defined with an expression and so have no type affinity. Is it possible to define them in a way that forces a certain type affinity?
e.g. if I have tables t1, t2 and a view definition
create view v as select coalesce(t1.c1, t2.c1) c1 from t1, t2 where X;
is there some way of giving v.c1 a type affinity? I've tried casting it
select cast(coalesce(t1.c1, t2.c1) as INTEGER) c1
but that doesn't seem to work.
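A small experiment, offered only to separate two notions: the CAST does control the storage class of the values the view returns (visible via typeof()), even though the view column itself ends up with no declared affinity.

CREATE TABLE t1 (c1 INTEGER);
CREATE TABLE t2 (c1 INTEGER);
INSERT INTO t1 VALUES ('1');   -- text literal, stored as integer by affinity
INSERT INTO t2 VALUES (2);
CREATE VIEW v AS
SELECT CAST(coalesce(t1.c1, t2.c1) AS INTEGER) AS c1 FROM t1, t2;
-- typeof() reports the storage class of each returned value:
SELECT c1, typeof(c1) FROM v;  -- 1 | integer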
nik
(223 rep)
Nov 16, 2015, 03:00 AM
• Last activity: May 11, 2025, 05:04 PM
1 vote • 1 answer • 1868 views
Full text search with json field in SQLite
Is it possible to have full-text search in SQLite together with JSON?
In SQLite, JSON is stored as TEXT ([JSON1](https://www.sqlite.org/json1.html)).
Full-text search requires the creation of a virtual table ([full text](https://www.sqlitetutorial.net/sqlite-full-text-search/)), but I don't know how to connect those two extensions so that full-text search wouldn't search the JSON field names.
WRITING (ID, FK_PERSON_ID, BOOK_NAME, TEXT_JSON)
PEOPLE (PERSON_ID, NAME)
Additionally, TEXT_JSON for a certain person contains the following JSON objects:
-- WRITING.ID = 1, WRITING.FK_PERSON_ID = 1, BOOK_NAME = "NAME_1"
{
"chapter1": "Title1",
"text1": "This is sample sentence with word text",
"text2": "This is sample sentence with word text"
}
-- WRITING.ID = 2, WRITING.FK_PERSON_ID = 1, BOOK_NAME = "NAME_101"
{
"chapter1": "Title2",
"text1": "This is sample sentence without"
}
(the structure of the JSON objects can differ)
How should I set up a virtual table for full-text search over a single person's writings, covering BOOK_NAME and the values of all TEXT_JSON attributes? Searching for the word text in the writings of PERSON_ID = 1 would return only WRITING.ID = 1.
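One possible setup, sketched with illustrative names and assuming FTS5: keep an FTS table whose rowid mirrors WRITING.ID and whose body holds only the JSON values (extracted with json_each), so key names are never indexed; person filtering happens by joining back to WRITING.

CREATE VIRTUAL TABLE writing_fts USING fts5(book_name, body);

-- Index the book name plus the JSON *values* only (keys are skipped).
INSERT INTO writing_fts(rowid, book_name, body)
SELECT w.ID,
       w.BOOK_NAME,
       (SELECT group_concat(j.value, ' ') FROM json_each(w.TEXT_JSON) AS j)
FROM WRITING w;

-- Search one person's writings; returns WRITING.ID = 1 for the word "text".
SELECT w.ID
FROM writing_fts
JOIN WRITING w ON w.ID = writing_fts.rowid
WHERE writing_fts MATCH 'text' AND w.FK_PERSON_ID = 1;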
SundayProgrammer
(11 rep)
May 3, 2020, 12:27 PM
• Last activity: May 10, 2025, 09:03 PM
0 votes • 1 answer • 46 views
What happens if I insert a row in the middle of an sqlite table `WITHOUT ROWID`?
WITHOUT ROWID is an SQLite optimization. The docs state that it makes the primary key of the table a clustered index.
As far as I know, a clustered index means that the physical ordering of the rows follows the index, making linear scans over it effective.
However, what will happen if I insert a row into the middle of such a table? Will the whole table file really need to be shifted after that?
peterh
(2137 rep)
May 4, 2025, 08:14 AM
• Last activity: May 4, 2025, 02:15 PM
1 vote • 1 answer • 41 views
How to turn on the expanded mode in SQLite 3?
After asking how to turn on the expanded mode in MySQL (https://serverfault.com/questions/618642/) and how to turn on the expanded mode in Oracle SQL*Plus, I come to you once again, and my question is **how to turn on the expanded mode in SQLite 3**?
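In the sqlite3 command-line shell the closest equivalent appears to be .mode line, which prints each column of a row on its own line:

sqlite> .mode line
sqlite> SELECT 1 AS a, 2 AS b;
  a = 1
  b = 2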
48347
(108 rep)
May 3, 2025, 03:25 PM
• Last activity: May 3, 2025, 04:53 PM
4 votes • 2 answers • 2698 views
Column length error querying SQLite via SQL Server Linked Server
I am attempting to query SQLite to copy data into corresponding tables in SQL Server. This is the first stage of an ETL process I'm putting together.
Windows 10 Pro, SQL Server 2017 Developer Edition, SQLite 3.30.1 (installed via Chocolatey)
I have created a 64-bit system DSN for the SQLite database and created a Linked Server named NJ which points to it, and I can successfully query _most_ tables, both via OPENQUERY and 4-part naming ([after setting LevelZeroOnly for the MSDASQL provider](http://sparkalyn.com/2008/12/invalid-schema-error/)). One table consistently throws an error.
The table definition in SQLite:
CREATE TABLE LogMemo (lParent ,lLogId integer, lText default "");
Querying from within SQLite works.
sqlite> select lparent,llogid,lText from [LogMemo] order by lparent desc limit 4;
GCZZ2Q|834111942|Found it
GCZZ2Q|834111838|Tftc!
GCZZ2Q|833813811|On a quick girls getaway but first let me grab a cache. We pulled over by GZ, I didn't look for long before making the find. I signed the log and replaced the cache as found. TFTC
GCZZ2Q|833807936|Crossed the Delaware Bay on the Cape May- Lewes Ferry (the New Jersey) with Lambmo, dukemom1, and TBurket. We had a wonderful trip, found 19 new and interesting caches, and introduced TBurket to this great adventure.
Ferry nice view was the first for the day, T's first find, and first NJ cache for Lambmo and dukemom. Yes, it is a nice view of the ferry.
Querying this table via the Linked Server returns the following error:
>Msg 7347, Level 16, State 1, Line 15
>OLE DB provider 'MSDASQL' for linked server 'nj' returned data that does not match expected data length for column '[nj]...[logmemo].lText'. The (maximum) expected data length is 510, while the returned data length is 2582.
Thinking it was a problem with the long text field, I tried to give some hints about how much data should be expected coming back. I have tried the following queries:
select top 4 lparent,llogid,cast(ltext as nvarchar(4000)) as ltext from nj...logmemo order by lparent desc;
select top 4 lparent,llogid,substring(ltext,1,4000) as ltext from nj...logmemo order by lparent desc;
select top 4 lparent,llogid,substring(ltext,1,20) as ltext from nj...logmemo order by lparent desc;
select top 4 lparent,llogid,substring(ltext,1,200) as ltext from nj...logmemo order by lparent desc;
select top 4 lparent,llogid,ltext from nj...logmemo order by lparent desc;
All result in the same error. So I tried using OPENQUERY instead:
SELECT top 4 * FROM OPENQUERY([NJ], 'select lparent,llogid,cast(ltext as varchar(20)) as ltext from [LogMemo] order by lparent desc limit 4')
SELECT top 4 * FROM OPENQUERY([NJ], 'select lparent,llogid,cast(ltext as varchar(4000)) as ltext from [LogMemo] order by lparent desc limit 4')
SELECT top 4 * FROM OPENQUERY([NJ], 'select lparent,llogid,substr(ltext,1,8000) as lText from [LogMemo] order by lparent desc limit 4')
SELECT top 4 * FROM OPENQUERY([NJ], 'select lparent,llogid,substr(ltext,1,200) as lText from [LogMemo] order by lparent desc limit 4')
The first three of these four queries return the first 3 of the expected 4 results, then the same error is thrown, with the exception that the reported returned data length is 728, not 2582. Note that the length of the long text associated with the last record in the original result set is 362 characters, which is 724 bytes (if we assume nvarchar).
The last query doesn't throw an error, but I only get the first 200 characters of the value in lText.
So, the question becomes...how can I extract the full text from this field in SQLite so I can insert it into my SQL Server table?
* Is there a limit to the size of data that can be returned for one field via this method/driver?
* Is there another setting I'm missing somewhere, or an extra parameter for OPENQUERY?
* Should I be looking at OPENROWSET instead?
I'm close to abandoning this angle entirely and just dumping the table data to CSV from SQLite and bulk-importing it into SQL Server.
Edit in response to one comment:
SELECT LEN(ltext) FROM nj...logmemo ORDER BY LEN(ltext) DESC;
Results in an error:
>Msg 7347, Level 16, State 1, Line 24
OLE DB provider 'MSDASQL' for linked server 'nj' returned data that does not match expected data length for column '[nj]...[logmemo].lText'. The (maximum) expected data length is 510, while the returned data length is 2582.
Doing similar with OPENQUERY:
select * from OPENQUERY([NJ], 'select length(ltext) as lText from [LogMemo] order by length(ltext)')
>Msg 7356, Level 16, State 1, Line 26
The OLE DB provider "MSDASQL" for linked server "NJ" supplied inconsistent metadata for a column. The column "lText" (compile-time ordinal 1) of object "select length(ltext) as lText from [LogMemo] order by length(ltext)" was reported to have a "DBTYPE" of 130 at compile time and 3 at run time.
alroc
(1694 rep)
Jan 12, 2020, 08:31 PM
• Last activity: Apr 21, 2025, 09:03 PM
1 vote • 1 answer • 101 views
How to know when it is time to vacuum an SQLite database file?
SQLite, contrary to most SQL-compatible DB engines, works directly on files instead of over network sockets. It is also not very good at concurrency; besides that, deletion operations are mostly advisory (i.e. they do not free the space of the deleted entities).
The result is that an SQLite database needs to be VACUUMed regularly, at least those with regular modify or delete operations. Vacuuming works by rebuilding the database file without the free space in it. Needless to say, it is absolutely non-concurrent with anything, which makes it a costly operation: we have downtime for the vacuum.
*However, I think there should exist some indicator which compares the size of the database file to the size of the actual data in it.*
Using this indicator, it could be decided whether we need vacuuming or not.
Does it exist?
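Two pragmas appear to provide exactly this indicator: the number of free pages versus the total page count of the file.

PRAGMA freelist_count;  -- pages inside the file that hold no data
PRAGMA page_count;      -- total pages in the file
-- e.g. schedule a VACUUM once freelist_count / page_count passes a threshold.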
peterh
(2137 rep)
Apr 11, 2025, 10:16 AM
• Last activity: Apr 15, 2025, 11:40 AM
1 vote • 1 answer • 1276 views
SQLite database Upgrade Schema without losing data
I have a small C# Windows Forms application.
It uses an SQLite DB on the client's PC.
I started with a schema DB, and over time I added more tables and columns to my local DB (not the client's).
I want to upgrade the client's DB schema without losing data.
So far I can update the schema, but some information is lost because the tables are dropped and recreated.
Can anyone help me?
This is an example of 2 tables; I just add new columns.
There is no easy diagram generator for SQLite, so added columns are marked with ****.
**Table Usuarios** CLIENT
UserId Integer
NombreUsuario TEXT
Password TEXT
ProfesionID Integer
Mail TEXT
**Table Usuarios NEW VERSION**
UserId Integer
NombreUsuario TEXT
Password TEXT
ProfesionID Integer
Mail TEXT
UseImages Integer ****
Color Integer ****
BackUpDate TEXT ****
**Table EquiposHabilitados CLIENT**
EquipoID TEXT
**Table EquiposHabilitados NEW VERSION**
Equipo TEXT ****
EquipoID TEXT
EquipoFechaVencimiento TEXT ****
UltimoTiempo TEXT ****
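A hedged sketch of the in-place route: SQLite's ALTER TABLE ... ADD COLUMN covers additions like these without dropping tables, and PRAGMA user_version can record which migrations have already run on a client.

ALTER TABLE Usuarios ADD COLUMN UseImages INTEGER;
ALTER TABLE Usuarios ADD COLUMN Color INTEGER;
ALTER TABLE Usuarios ADD COLUMN BackUpDate TEXT;
ALTER TABLE EquiposHabilitados ADD COLUMN Equipo TEXT;
ALTER TABLE EquiposHabilitados ADD COLUMN EquipoFechaVencimiento TEXT;
ALTER TABLE EquiposHabilitados ADD COLUMN UltimoTiempo TEXT;

PRAGMA user_version;      -- read the schema revision stored in the file
PRAGMA user_version = 2;  -- stamp it after a successful upgrade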
Guillermo
(23 rep)
Mar 24, 2022, 09:39 PM
• Last activity: Apr 9, 2025, 08:04 AM
3 votes • 1 answer • 2153 views
Are there any known benchmarks/statistics on UnQLite?
I came across UnQLite the other day on Stack Overflow and it has me intrigued for some software I'm working on. However, very few tests have been published on how well it performs. There are plenty of questions and public data available that help with SQLite and others, but nothing on UnQLite. (I'm looking at serverless databases, hence my mention of SQLite. I understand that SQLite is relational whereas UnQLite is KVP and the two are not forked from the same parent projects.)
I'd like to know:
- What UnQLite seems to achieve in read/write speeds
- What types of reads and writes (sequential, synchronous, large data, batch, random, etc.) it does well at compared to some other databases
- Where bottlenecking seems to occur
- How large the database tends to get (disk size or rows) before the time it takes to find and extract data begins to see a significant increase (i.e. slows down)
- If at all possible, what disk format(s) (ext3/4, NTFS, exFAT, HFS+) cause problems for the database
- Any recommended ways of optimizing performance with it
Thank you.
Kamikaze Rusher
(131 rep)
Apr 16, 2015, 02:12 PM
• Last activity: Apr 1, 2025, 04:14 AM
0 votes • 1 answer • 55 views
SQLite count while also grouping?
select * from tblA a
left join tblB b on b.id = a.bId
group by a.bId
If I group by a column, I get only the first record among the records that share the same bId value. Is there a way to also get, for each result row, the total number of records with that bId, without doing a separate query?
id | bId
--------
1 | 1
2 | 1
3 | 1
4 | 2
5 | 3
so in the above query I would expect
id | bId | counts
-------------------
1 | 1 | 3 <- 3 records with bId = 1
4 | 2 | 1
5 | 3 | 1
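A hedged sketch (the tblB join is omitted for brevity): counting in the same grouped query, with MIN(id) making the "first record per bId" explicit rather than relying on an arbitrary row.

SELECT MIN(a.id) AS id,
       a.bId,
       COUNT(*)  AS counts
FROM tblA a
GROUP BY a.bId;

-- Or keep every row and attach the count with a window function (SQLite 3.25+):
-- SELECT a.*, COUNT(*) OVER (PARTITION BY a.bId) AS counts FROM tblA a;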
Alex
(181 rep)
Mar 24, 2025, 01:29 PM
• Last activity: Mar 24, 2025, 03:08 PM