
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
716 views
Postgres: are too many references to a single table good or bad design?
*I have a Postgres database schema for a web app that needs real-time data and hence a huge volume of select, update, and insert queries. Data manipulation is done mostly via Postgres stored procedures.* **I have designed each table to have {created, updated, deleted}_by columns, all pointing to the users table's id. In total, I will have around 100-150 such references.** In the users table, I see 100+ lines under "Referenced by":

    TABLE "XXX" CONSTRAINT "XXXXX" FOREIGN KEY (updated_user_id) REFERENCES users(id) ON DELETE SET NULL
    TABLE "XXY" CONSTRAINT "XXY" FOREIGN KEY (created_user_id) REFERENCES users(id)
    ... (~150 in total)

I wanted to know whether this is a good design and what the pros and cons of such a database design are; I would really love to hear from all. Note: my maximum data size will be < ~50 GB/year. Note: I have these indexes on the users table:

    Indexes:
        "id_x" PRIMARY KEY, btree (id)
        "ABC" btree (id, username, email)
        "XYZ" UNIQUE, btree (lower(email::text))
        "aaa" UNIQUE, btree (lower(username::text))
        "bbb" UNIQUE CONSTRAINT, btree (email)
        "cccc" UNIQUE CONSTRAINT, btree (username)
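For reference, a minimal sketch of the pattern being described; the orders table is a hypothetical stand-in for the ~100-150 real tables:

    -- The users table that all audit columns point at.
    CREATE TABLE users (
        id       bigserial PRIMARY KEY,
        username text NOT NULL UNIQUE,
        email    text NOT NULL UNIQUE
    );

    -- Every other table repeats these three references to users(id);
    -- ON DELETE SET NULL matches the constraints quoted in the question.
    CREATE TABLE orders (
        id              bigserial PRIMARY KEY,
        created_user_id bigint REFERENCES users(id) ON DELETE SET NULL,
        updated_user_id bigint REFERENCES users(id) ON DELETE SET NULL,
        deleted_user_id bigint REFERENCES users(id) ON DELETE SET NULL
    );

The per-row cost of each such foreign key is a lookup into users on insert or update of the referencing row, plus a scan of every referencing table when a users row is deleted, which is why indexing the *_user_id columns matters at this scale.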
Smarhacker (11 rep)
Mar 7, 2022, 06:06 PM • Last activity: Mar 7, 2022, 10:25 PM
-1 votes
1 answer
41 views
Schema required to find a record in a table with null values
I have the following table:

| ID | Country | City | Street | No |
|----|---------|------|--------|----|
| 1  | UK      |      |        | 2  |
| 2  |         | Lon  |        |    |
| 3  |         |      | Oxf    | 19 |
| 4  | UK      | Glas |        |    |
| 5  | US      | NY   | Wall   |    |
| 6  | US      | NY   |        | 14 |
| 7  |         |      |        | 5  |

In this table, I have defined some address rules. For example:

1. If Country is UK and the house number is 2: the value of City and Street does not matter.
2. If City is London: the value of Country, Street & No can be anything.
3. The empty cells are NULL, and that means they will match any value.

Now, I have an address with the following properties:
    {
        "Country": "Brazil",
        "City": "Rio",
        "Street": "Aven",
        "No": 5
    }
I need to find the rules that this object matches. The above address will match the last rule, because No is 5.

Now my question: is the above table structure suitable for my purpose? When searching for a record in the rules, I have to ignore the NULL values in all columns, so I think that will make the query inefficient. Also, in the above table, what index should be used to cover all columns and conditions? Is it possible to have an efficient index that covers everything?

Another concern I have is redundancy. As you can see, the Country, City, and even Street values could be inserted repeatedly. Is it possible to reduce redundancy here? What's the recommended schema to store and search such data? I am using PostgreSQL. Also, note that this data is read-intensive: the rules are defined once in a while, but searching happens a lot.
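For what it's worth, the NULL-as-wildcard match described here can be written directly; a minimal sketch, assuming a rules table with the columns shown above:

    -- A rule matches when every column is either NULL (wildcard)
    -- or equal to the corresponding value of the address.
    SELECT id
    FROM   rules
    WHERE  (country IS NULL OR country = 'Brazil')
      AND  (city    IS NULL OR city    = 'Rio')
      AND  (street  IS NULL OR street  = 'Aven')
      AND  (no      IS NULL OR no      = 5);

The OR-per-column shape is exactly what makes a single covering B-tree index hard to apply, which is the inefficiency the question anticipates.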
Meysam (101 rep)
Feb 25, 2022, 07:50 AM • Last activity: Feb 25, 2022, 02:27 PM
0 votes
0 answers
548 views
Is a columnstore index appropriate when all columns are (mostly) unique?
I have a longish table (60M rows) with only 7 columns. One is a unique ID, two are datetimes, and two are notes and descriptions. The notes and descriptions are very regular, except for a tag at the end of the text, so they're technically unique. I can't take that unique tag out, as the tags are signatures and these notes and descriptions are legal documents. If it weren't for those tags, they'd be 95% from stock descriptions - maybe 15 variations. These descriptions are up to 8K chars long.

I long for some reasonable compression and am considering a clustered columnstore index to implement it, but I'm unclear as to whether the compression will even occur with these columns being tagged into uniqueness. These descriptive columns comprise more than 90% of the row data. So... rowstore indexing is appropriate for the 'key' column, but I'm wondering if I should define this after the clustered columnstore index.

Current:

    create table dbo.SPECIMEN
    (
        ID             int not null,
        Specimen_Types varchar(255) null,
        Collected_Date datetime null,
        Received_Date  datetime null,
        Results        varchar(4000) null,
        Notes          varchar(max) null,
        Lab_Report_ID  int not null
    )
    go
    create clustered index [SPECIMEN.ID.Lab_Report_ID.Fake.PrimaryKey] on dbo.SPECIMEN(ID, Lab_Report_ID);
    go
    create index [SPECIMEN.Lab_Report_ID.Index] on dbo.SPECIMEN(Lab_Report_ID);

...and if I get good compression, I'd change the indexes this way:

    go
    create clustered columnstore index [SPECIMEN.CCI] on dbo.SPECIMEN;
    go
    create index [SPECIMEN.ID.Lab_Report_ID.Fake.PrimaryKey] on dbo.SPECIMEN(ID, Lab_Report_ID);
    go
    create index [SPECIMEN.Lab_Report_ID.Index] on dbo.SPECIMEN(Lab_Report_ID);

Does this make sense? I have very little experience with columnstore indexing and don't want to step on my own foot.

BTW - the fake primary key: it is supposed to be unique, but the app that populates the source data occasionally throws in a duplicate. This table is supposed to be an extract from that semi-stable source.
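As a side note: if an upgrade is an option, SQL Server 2019 added COLUMNSTORE as a target for sys.sp_estimate_data_compression_savings, which would let you estimate the savings before committing to the rebuild (earlier versions only accept ROW and PAGE here):

    -- Estimate clustered-columnstore savings for dbo.SPECIMEN
    -- (the COLUMNSTORE option requires SQL Server 2019 or later).
    EXEC sys.sp_estimate_data_compression_savings
         @schema_name      = 'dbo',
         @object_name      = 'SPECIMEN',
         @index_id         = NULL,
         @partition_number = NULL,
         @data_compression = 'COLUMNSTORE';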
Clay (101 rep)
Jan 22, 2022, 03:36 PM
0 votes
1 answer
1298 views
Should I use a composite key or the primary key from another table?
We have an education project with the following entities:

- Domain (e.g. Programming, UI/UX, AI, ML); each Domain has 5 Levels (1, 2, 3, 4, 5)
- Building Blocks - which are small topics, e.g. java, multi-threading, loops, prototyping, user interviews. Each Level in a Domain is built from multiple Building Blocks.
- Learning Asset (a link to study a concept; it can be associated with multiple Building Blocks). Further, these Learning Assets are mapped to a particular Domain -> Level -> Building Block.

These are the tables we have thought of:

Domain

| id | Name        |
|----|-------------|
| 10 | UIUX        |
| 11 | Programming |
| 12 | AI          |

Building Blocks

| id | Name            |
|----|-----------------|
| 1  | loops           |
| 2  | multi-threading |
| 3  | user-interview  |

Then we store the mapping of Building Blocks to a Domain-Level:

Domain-Level-BuildingBlocks Mapping Table

| DLB_Id | domainId | level | buildingBlockId |
|--------|----------|-------|-----------------|
| 100    | 11       | 1     | 1               |
| 200    | 11       | 2     | 2               |
| 300    | 10       | 1     | 3               |

In this table, (domainId, level, buildingBlockId) form a composite key.

Learning Asset Table

| id | Name                | link          |
|----|---------------------|---------------|
| 1  | Loop Notes          | https://a.com |
| 2  | Operators           | https://b.com |
| 3  | Process and Threads | https://c.com |

A Learning Asset can be connected to multiple Building Blocks:

Learning Asset-BuildingBlocks Mapping Table

| id | learningAssetId | buildingBlockId |
|----|-----------------|-----------------|
| 1  | 1               | 1               |
| 2  | 2               | 1               |
| 3  | 3               | 2               |

Now the admin can select whether a Learning Asset is applicable to a Domain-Level-BuildingBlock combination:

Learning Asset-Domain-Level-BuildingBlocks Mapping Table **(Table A)**

| id | learningAssetId | domainId | level | buildingBlockId |
|----|-----------------|----------|-------|-----------------|
| 1  | 1               | 11       | 1     | 1               |
| 2  | 2               | 11       | 1     | 1               |

In this table, (domainId, level, buildingBlockId) form a composite key.

My question about this table: should I again store **(domainId, level, buildingBlockId)**, or should I use their primary key **DLB_Id** from the **Domain-Level-BuildingBlocks table**, like this:

Learning Asset-Domain-Level-BuildingBlocks Mapping Table **(Table B)**

| id | learningAssetId | DLB_Id |
|----|-----------------|--------|
| 1  | 1               | 100    |
| 2  | 3               | 200    |

1. My question is whether to use Table A or Table B.
2. If Table B is the correct way, should I generate DLB_Id as a string by combining domainId + "-" + level + "-" + buildingBlockId instead of using an auto-increment integer primary key? Will the indexing on the generated string be as efficient as the auto-increment integer primary key? The reason for generating the string is that, when I need to fetch the Learning Assets that belong to a Domain-Level-BuildingBlock combination, I don't need to use the Domain-Level-BuildingBlocks Mapping Table - rather, I can directly look up the generated string id in the Learning Asset-Domain-Level-BuildingBlocks Mapping Table:

Learning Asset-Domain-Level-BuildingBlocks Mapping Table **(Table C)**

| id | learningAssetId | DLB_Id (as generated string) |
|----|-----------------|------------------------------|
| 1  | 1               | '11-1-1'                     |
| 2  | 3               | '11-2-2'                     |

i.e. Table B or Table C? Thank you.
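To make Table B concrete, here is a minimal sketch in generic SQL (the snake_case names are hypothetical); the unique constraint on the natural triple keeps the surrogate DLB_Id honest:

    CREATE TABLE domain_level_building_block (
        dlb_id            INT PRIMARY KEY,              -- the surrogate DLB_Id
        domain_id         INT NOT NULL,
        level             INT NOT NULL,
        building_block_id INT NOT NULL,
        UNIQUE (domain_id, level, building_block_id)    -- the natural composite key
    );

    CREATE TABLE learning_asset_dlb (
        id                INT PRIMARY KEY,
        learning_asset_id INT NOT NULL,
        dlb_id            INT NOT NULL,
        UNIQUE (learning_asset_id, dlb_id),             -- no duplicate mappings
        FOREIGN KEY (dlb_id) REFERENCES domain_level_building_block (dlb_id)
    );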
j10 (309 rep)
Jul 25, 2021, 07:41 AM • Last activity: Jul 25, 2021, 04:17 PM
1 vote
1 answer
631 views
Hundreds of millions of rows: Index by composite BIGINT, VARCHAR, or CHAR of HASH?
Given the following:

**Target platform**: MySQL 5.7, using InnoDB.

**Scenario**: Storing hundreds of millions of email addresses (plus some properties not used in queries). All queries will be done by knowing the email address beforehand.

**Proposed solution**:

1. SHA256 the email address.
2. Shard "the email table" by taking the first 3 hex characters of the SHA256, creating 4096 tables (from 0x000 to 0xFFF) that will act as buckets for the email addresses. This tries to avoid having one single huge table.

**Question**: Which of the following would be a good PK to use inside each one of those 4096 buckets in terms of performance (indexing being more important than querying)? Use of disk space might not be that important upfront (unless there are some heavy arguments for taking it into account, which I'm open to hearing and discussing, of course).

1. VARCHAR(255) of the email address? This is of course the simplest.
2. CHAR(64) of the ASCII SHA256 hash of the email address? Long shot: I'm assuming that indexing and comparing fixed-length strings (CHAR) is faster than variable-length strings (VARCHAR).
3. Split the SHA256 into four 64-bit integers, then create a composite PK of 4 BIGINT columns and index/query by those 4 BIGINT columns instead of using a VARCHAR/CHAR? Crazy idea: perhaps using only 64-bit integers for indexing and querying can provide a noticeable improvement in index and query performance (and perhaps also in disk access/storage). Although this is a composite PK of 4 BIGINT columns :\

Thanks in advance.
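A minimal sketch of option 2 under the question's own assumptions (one of the 4096 bucket tables; the names are hypothetical); MySQL's built-in SHA2() yields the 64-character hex digest used as the key:

    CREATE TABLE email_bucket_000 (
        email_sha256 CHAR(64) NOT NULL PRIMARY KEY,  -- hex SHA-256 of the address
        email        VARCHAR(255) NOT NULL
    ) ENGINE = InnoDB;

    -- All lookups know the address beforehand, so they hash and seek:
    SELECT email
    FROM   email_bucket_000
    WHERE  email_sha256 = SHA2('someone@example.com', 256);

A common refinement of the same idea stores the digest as BINARY(32) via UNHEX(SHA2(...)), halving the key size relative to the hex CHAR(64).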
marcelog (111 rep)
Apr 26, 2017, 12:47 PM • Last activity: Jun 13, 2019, 12:01 PM
1 vote
0 answers
16 views
Elastic: Are Analyzers only for `text` type fields?
Most of the documentation about Elastic Analyzers assumes that you need an Analyzer, and so it talks mostly about how Analyzers work and about the differences between the various Analyzers. **But I had trouble finding authoritative documentation answering the question: are Elastic Analyzers only for `text`-type fields?**
Trevor Boyd Smith (111 rep)
Sep 20, 2018, 05:02 PM • Last activity: Sep 20, 2018, 05:10 PM
12 votes
1 answer
8501 views
When to use multiple tables in DynamoDB?
The DynamoDB [best practices](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-general-nosql-design.html) make it clear that:

> You should maintain as few tables as possible in a DynamoDB application. Most well designed applications require only one table.

I find it amusing, then, that just about every single tutorial I've seen dealing with DynamoDB has a multi-table design. But what does this mean in practice?

Let's consider a simple application with three main entities: Users, Projects, and Documents. A User owns multiple Projects, and a Project can have multiple Documents. We typically have to query on the Projects for a User, and on the Documents for a Project. Reads outnumber writes by a significant margin.

A naive tutorial's table design would use three tables:

    Users        Hash key: user-id
    Projects     Hash key: project-id     Global Index: user-id
    Documents    Hash key: document-id    Global Index: project-id

We could pretty easily collapse Project and Document into one Documents table:

    Documents    Hash key: project-id    Sort key: document-id    Global Index: user-id

But why stop there? Why not one table to rule them all? Since the User is the root of everything...

    Users        Hash key: user-id    Sort key: aspect

    user-id    aspect                  (item attributes)
    ---------  ----------------------  ------------------------
    foo        user                    email: foo@bar.com ...
    foo        project:1               title: "The Foo Project"
    foo        project:1:document:2    document-id: 2 ...

Then we would have a Global Index on, say, the email field for user record lookups, and another on the document-id field for direct document lookups.

Is that how it's supposed to work? Is it legit to throw such wildly divergent kinds of data into the same table? Or is the second, two-table design a better approach? At what point would it be correct to add a second table?
David Eyk (537 rep)
Apr 18, 2018, 10:31 PM • Last activity: May 9, 2018, 11:45 PM
3 votes
0 answers
1474 views
Modeling a database structure for a trip management business domain
I am sketching up a database design and it is giving me some trouble. Basically, something just "smells" about this design, but I can't seem to arrive at a better way to do it. For example, the joins needed to get back to the `Person` table could be ugly.

**Business rules**

- A `Person` can go on many `Trips`.
- Each `Trip` has many `Destinations` for its many participants (`Person`).
- Every potential participant has an option to RSVP for a given `Trip`. If they RSVP for a `Trip`, they must then RSVP for each `Destination` on that `Trip`.
- Each `Destination` has an optional agenda (`DestinationComponent`) that each participant can RSVP for.
- Assume that each RSVP relationship (at every level) will contain many more data columns holding information about the particular RSVP - for example, each participant can `Vote` on each `Destination` and `DestinationComponent` (the `Vote` column has been omitted from the RSVP tables in the diagram).

**Current diagram**

This is the diagram I have created so far:

*(ER diagram image)*

**Questions**

- Is there a better way to manage these relationships?
- Would a "master" junction table for RSVPs and one for `Vote`s be better? I'm worried those tables would become massive over time.

Guidance would very much be appreciated!
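To make the RSVP level concrete, a hedged sketch of just one junction table (the names and types are assumptions, not the diagram's actual definitions):

    -- One row per participant per destination; per-RSVP data such as
    -- the Vote from the business rules lives on this row.
    CREATE TABLE destination_rsvp (
        person_id      INT NOT NULL,
        destination_id INT NOT NULL,
        vote           INT,                 -- optional per-RSVP vote
        PRIMARY KEY (person_id, destination_id),
        FOREIGN KEY (person_id)      REFERENCES person (id),
        FOREIGN KEY (destination_id) REFERENCES destination (id)
    );

A "master" RSVP table would merge this with the trip-level and component-level junctions behind a type column, trading simpler joins for nullable foreign keys.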
thehnrytrapezoid (31 rep)
Sep 13, 2017, 12:57 AM • Last activity: Mar 4, 2018, 08:11 AM
1 vote
1 answer
193 views
The Logical Type of The Document in CouchBase
How should the logical type of a document be specified in Couchbase? Using a field for the type, or employing separators in keys, like product::app::123id?

Currently I'm putting the document type inside the document itself, in a string field named Type - say, Product. But I see this pattern of using separators in the document id, like product::app::123id. I've done some playing around with it but couldn't get the type part (product) out of the key (of course, parsing it by splitting is possible, which to me seems to have the same overhead in both N1QL and views).

So how should the document type (the app-logic type) be specified?

Env: Couchbase Community, inside Docker on Ubuntu 14.04, using the Go client gocb.
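For comparison, a small N1QL sketch of the type-field approach (the bucket name apps is hypothetical); with an index on Type, selecting by logical type needs no key parsing at all:

    CREATE INDEX idx_type ON `apps`(Type);

    -- META(p).id still exposes the key when needed, but the
    -- type filter is an ordinary field predicate:
    SELECT META(p).id, p.*
    FROM `apps` AS p
    WHERE p.Type = 'Product';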
Kaveh Shahbazian (113 rep)
Feb 7, 2017, 01:41 PM • Last activity: Feb 7, 2017, 02:22 PM
2 votes
2 answers
102 views
Is it necessary to add an additional column to make this clustered index unique?
I have a table as listed below in SQL Server 2012. There is a clustered index on RequisitionID, but this column is not unique: there can be many ProductIDs for one RequisitionID.

    CREATE TABLE [dbo].[RequisitionProducts](
        [RequisitionID] [int] NOT NULL,
        [ProductID] [int] NOT NULL,
        [Qty] [int] NOT NULL,
        [VendorID] [int] NOT NULL,
        [UnitAmount] [decimal](10, 2) NOT NULL,
        CONSTRAINT [pk_RequisitionProducts] PRIMARY KEY NONCLUSTERED
        (
            [RequisitionID] ASC,
            [ProductID] ASC
        )
    )

    CREATE CLUSTERED INDEX [cidx_RequistionProducts] ON [dbo].[RequisitionProducts]
    (
        [RequisitionID] ASC
    )
    GO

I searched a lot and found that a clustered index can be non-unique, but only in limited scenarios. The only scenario mentioned as appropriate is when there is a [range search](https://technet.microsoft.com/en-us/library/ms191311(v=sql.105).aspx). In my case almost all queries will be based on RequisitionID only, and no range search is required.

Should I add ProductID as well to make the clustered index unique? What are the pros and cons?
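If you do widen it, a hedged sketch of the in-place change; SQL Server silently adds a hidden 4-byte uniquifier to duplicate keys of a non-unique clustered index, which appending the narrow ProductID avoids:

    -- Rebuild the clustered index as unique, replacing the existing
    -- non-unique definition in place.
    CREATE UNIQUE CLUSTERED INDEX [cidx_RequistionProducts]
        ON [dbo].[RequisitionProducts] ([RequisitionID] ASC, [ProductID] ASC)
        WITH (DROP_EXISTING = ON);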
LCJ (900 rep)
Sep 14, 2016, 01:56 AM • Last activity: Sep 14, 2016, 01:12 PM
1 vote
1 answer
199 views
Multiple Non-Indexed Views with INCLUDEs vs. Multiple Indexed Views in High Write Situations
When you want to encapsulate your T-SQL to select different subsets of data from the same base tables, is it more efficient to use plain views in conjunction with nonclustered indexes and INCLUDEs on the base tables, or is it better to use multiple indexed views, even when writes are frequent?

Background: I've run into a design issue that could cause me a lot of problems down the line if I handle it incorrectly, so I'd like some feedback on how best to approach it. Essentially, I have a series of tables consisting mainly of float columns which I need to join in hundreds of queries, each of which retrieves myriad subsets of the same columns and joins them together in similar but not always identical ways. For ease of maintenance, code legibility, modularization and the like, I'd like to encapsulate as many of the commonalities in T-SQL across the queries as possible.

For example, in the sample code below, I select a slightly different list of columns in the second query than in the first, plus join to a third table; there are dozens of such permutations of similar SQL statements scattered across hundreds of queries. Most of the queries occur in stored procedures that perform one or more UPDATEs, plus some rare DELETEs or INSERTs.

Stored Procedure Example 1
--------------------------

    ;WITH CTE1 AS
    (
        SELECT T1.ID, Column1, Column2, Column3, Column4,
               Column3InTable2, Column5InTable3
        FROM Table1 AS T1
        INNER JOIN Table2 AS T2 ON T1.ID = T2.ForeignKeyID
    )
    UPDATE T1
    SET Column1 = Whatever
    FROM Table1 AS T1
    INNER JOIN CTE1 AS C ON T1.ID = C.ID

Stored Procedure Example 2
--------------------------

    ;WITH CTE1 AS
    (
        SELECT T1.ID, Column1, Column2, Column3, Column5,
               Column3InTable2, Column2InTable3, Column3InTable3
        FROM Table1 AS T1
        INNER JOIN Table2 AS T2 ON T1.ID = T2.ForeignKeyID
        INNER JOIN Table3 AS T3 ON T1.ID = T3.ForeignKeyID
    )
    UPDATE T1
    SET Column3InTable3 = Whatever
    FROM Table1 AS T1
    INNER JOIN CTE1 AS C ON T1.ID = C.ID

What I'd like to use are simplified retrieval structures like this:

    CREATE VIEW View1 AS
    SELECT T1.ID, Column1, Column2, Column3, Column4,
           Column3InTable2, Column5InTable3
    FROM Table1 AS T1
    INNER JOIN Table2 AS T2 ON T1.ID = T2.ForeignKeyID

    CREATE VIEW View2 AS
    SELECT T1.ID, Column1, Column2, Column3, Column5,
           Column3InTable2, Column2InTable3, Column3InTable3
    FROM Table1 AS T1
    INNER JOIN Table2 AS T2 ON T1.ID = T2.ForeignKeyID
    INNER JOIN Table3 AS T3 ON T1.ID = T3.ForeignKeyID

Stored Procedure Example 1 Updated
----------------------------------

    UPDATE V1
    SET Column1 = Whatever
    FROM View1 AS V1

Stored Procedure Example 2 Updated
----------------------------------

    UPDATE View2
    SET Column3InTable3 = Whatever

From experience, I've already learned that retrieving the data through table-valued functions leads to poor performance, which improved dramatically when I created a single indexed temporary table covering all of the combinations of columns these queries need. Unfortunately, creating different temp tables that retrieve only the subsets of data I need for each query quickly turns into a maintenance and coordination nightmare. Therefore, I still need to refer to complicated joins against the same broad temp table in every procedure, which doesn't help me modularize things at all.

Ideally, I'd like to use views to encapsulate the code (look at how much easier the samples above are to read when they refer to views), but I imagine that creating a different indexed view for each of the dozens of base queries I need would rapidly degrade performance, since all of the base tables are frequently updated on an almost 1:1 basis for each read. Could I get around this by instead using a series of non-indexed views that operate only on the columns I need, while indexing only the base tables with a series of INCLUDE clauses tailored to each subset of columns? Or am I doomed to run into the same performance degradation due to the frequent UPDATEs?

Thankfully, I only need to update one base table in any given statement, so the restriction against updating multiple tables through a single view isn't an issue (typically, I only need to read the other tables in order to calculate the new values of the updated columns). It's mainly the frequency of the updates that is complicating my efforts at encapsulation.

After reading the Microsoft articles on Designing Indexed Views and indexing, as well as the replies in the thread Using indexed views for aggregates - too good to be true?, I haven't yet seen anything that would discourage me from using this approach; perhaps there's a better one I haven't thought of, though. I've also toyed with the idea of building these views upon each other hierarchically to save even more code, but I don't know if that would further complicate things. Thanks in advance for any advice.
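For reference, a hedged sketch of what the indexed-view variant of View1 could look like, assuming (as the samples imply) at most one Table2 row per Table1 row, so the required unique clustered key can be ID alone; indexed views also demand SCHEMABINDING and two-part names:

    CREATE VIEW dbo.View1
    WITH SCHEMABINDING
    AS
    SELECT T1.ID, T1.Column1, T1.Column2, T2.Column3InTable2
    FROM dbo.Table1 AS T1
    INNER JOIN dbo.Table2 AS T2 ON T1.ID = T2.ForeignKeyID;
    GO
    -- The first index on a view must be unique and clustered; from then
    -- on, every write to Table1 or Table2 also maintains this index.
    CREATE UNIQUE CLUSTERED INDEX IX_View1 ON dbo.View1 (ID);

That maintenance cost on every write is exactly the trade-off being weighed here against plain views over INCLUDE-tailored base-table indexes.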
SQLServerSteve (123 rep)
Aug 23, 2016, 03:17 AM • Last activity: Aug 23, 2016, 04:54 PM
9 votes
1 answer
1659 views
Is the WHERE-JOIN-ORDER-(SELECT) rule for index column order wrong?
I am trying to improve this (sub-)query, which is part of a larger query:

    select SUM(isnull(IP.Q, 0)) as Q, IP.OPID
    from IP
    inner join I on I.ID = IP.IID
    where IP.Deleted = 0
      and (I.Status > 0 AND I.Status < ...)

Following the advice from https://www.mssqltips.com/sqlservertutorial/3208/use-where-join-orderby-select-column-order-when-creating-indexes/ , I created an index to address the key lookups and create an index seek:

    CREATE NONCLUSTERED INDEX [IX_I_Status_1] ON [dbo].[Invoice]([Status], [ID])

The extracted query immediately used this index. But the original, larger query it is part of didn't. It did not even use it when I forced it using WITH(INDEX(IX_I_Status_1)). After a while I decided to try another new index with the order of the indexed columns changed:

    CREATE NONCLUSTERED INDEX [IX_I_Status_2] ON [dbo].[Invoice]([ID], [Status])

WOHA! This index was used by the extracted query and also by the larger query! Then I compared the extracted query's IO statistics, forcing it to use [IX_I_Status_1] and then [IX_I_Status_2]:

Results for [IX_I_Status_1]:

    Table 'I'.  Scan count 5, logical reads 636,  physical reads 16, read-ahead reads 574
    Table 'IP'. Scan count 5, logical reads 1134, physical reads 11, read-ahead reads 1040
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0

Results for [IX_I_Status_2]:

    Table 'I'.  Scan count 1, logical reads 615,  physical reads 6, read-ahead reads 631
    Table 'IP'. Scan count 1, logical reads 1024, physical reads 5, read-ahead reads 1040
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0

OK, I could accept that the mega-large-monster query is simply too complex for SQL Server to find the ideal execution plan, so it misses my new index. But I don't understand why [IX_I_Status_2] seems to be the more suitable and more efficient index for this query. Since the query first filters table I by the Status column and then joins to table IP, why is [IX_I_Status_2] better, and why is it used by SQL Server instead of [IX_I_Status_1]?
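A third shape may be worth testing alongside the two above (a sketch only, not a verdict on the rule): if the plan seeks on the join column ID and evaluates Status as a residual predicate, Status can ride along as an included column rather than a key column:

    -- Same seek on ID; Status lives only in the leaf pages, so the
    -- filter is checked there without a key lookup.
    CREATE NONCLUSTERED INDEX [IX_I_Status_3]
        ON [dbo].[Invoice] ([ID])
        INCLUDE ([Status]);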
Magier (4827 rep)
May 25, 2016, 09:46 AM • Last activity: May 25, 2016, 06:32 PM