Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
109
views
How does the graph database (Neo4j) solve the `O(n)` problem as data grows compared to RDBMSs?
A training course in the Neo4j Academy mentions that:
> When querying across tables, the joins are computed at read-time,
> using an index to find the corresponding rows in the target table. The
> more data added to the database, the larger the index grows, the
> slower the response time.
>
> This problem is known as the “Big O” or O(n) notation.
This sounds quite vague to me: as the data grows, more relationships are also added to the graph, so won't the query response time of a graph database get slower as well?
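For what it's worth, the claim is about index-free adjacency rather than Big-O in a strict sense: an index is still used to find the starting node, but the hops themselves follow pre-materialised relationship pointers instead of doing per-row index lookups. A minimal Cypher sketch, with an assumed Person/FOLLOWS schema that is not from the course:

```cypher
// Assumed schema, not from the course: (:Person)-[:FOLLOWS]->(:Person).
// The index is only used to locate the anchor node; the two hops after it walk
// stored relationship pointers, so their cost depends on how many FOLLOWS
// relationships the matched nodes have, not on the total size of the database.
CREATE INDEX person_name IF NOT EXISTS FOR (p:Person) ON (p.name);
MATCH (p:Person {name: 'Alice'})-[:FOLLOWS]->()-[:FOLLOWS]->(fof:Person)
RETURN DISTINCT fof.name;
```

Traversal cost still grows with the number of relationships actually touched (dense nodes hurt), which is the part the quoted wording glosses over.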
Kt Student
(115 rep)
Mar 7, 2025, 04:55 AM
• Last activity: Mar 7, 2025, 06:29 PM
0
votes
1
answers
32
views
Why does `MATCH (p:Person)-[:ACTED_IN]->(m) WHERE 'Neo' IN r.roles RETURN p.name` return 3 rows for only 1 node?
In Neo4j, when I use the following query:
MATCH (p:Person)-[:ACTED_IN]->(m) WHERE 'Neo' IN r.roles RETURN p
then it returns only one Person node.
But when I change the query to:
MATCH (p:Person)-[:ACTED_IN]->(m) WHERE 'Neo' IN r.roles RETURN p.name
then it returns 3 rows.
This is strange to me, since I expected only one row to be returned.
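For reference, this query appears to target Neo4j's built-in Movies sample set; here is a sketch with the relationship bound (so `r` is actually defined) that makes the three rows visible:

```cypher
// Keanu Reeves is recorded with roles ['Neo'] in three Matrix films in the sample data,
// so the pattern matches three paths. RETURN p also yields three rows; the Browser's
// graph view merely collapses the duplicate Person node into a single bubble, while
// the table view shows one row per matched path.
MATCH (p:Person)-[r:ACTED_IN]->(m)
WHERE 'Neo' IN r.roles
RETURN p.name, m.title;
```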

Kt Student
(115 rep)
Feb 7, 2025, 03:18 AM
• Last activity: Feb 7, 2025, 05:43 PM
0
votes
0
answers
27
views
Ray (Anyscale) + Neo4j Drivers
For certain reasons I am using AnyScale to parallelize computation of nodes and edges to insert into my Neo4j DB.
I saw on the Neo4j website that they recommend keeping only one driver instance open per application, since each driver has to set up its own connection pool and having multiple drivers connected at the same time can be costly.
Is there any significant performance downside to having each of the nodes own its own driver connection, or should I apply something like MapReduce to ensure that only a minimal number of nodes connect to the Neo4j DB?
Kevin Y.
(1 rep)
Oct 31, 2024, 04:00 PM
0
votes
0
answers
175
views
How to create multiple databases in Neo4j 5.24.2?
I have tried opening a cypher shell to create a new Neo4j database and failed as follows:
neo4j@neo4j> :use system
neo4j@system> CREATE DATABASE demo;
Unsupported administration command: CREATE DATABASE demo
I attempted to find some solutions on Stack Overflow; some say that I should uncomment this line in neo4j.conf:
initial.dbms.default_database=neo4j
I did that and restarted my Neo4j server, but the problem was still there.
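One possibility worth checking (an assumption, not a confirmed diagnosis): CREATE DATABASE is an Enterprise Edition command, and Community Edition rejects it as unsupported regardless of neo4j.conf. A quick way to see the edition and the existing databases from cypher-shell:

```cypher
// Lists the databases this DBMS knows about.
SHOW DATABASES;
// Reports whether this build is "community" or "enterprise".
CALL dbms.components() YIELD name, versions, edition
RETURN name, versions, edition;
```

As far as I understand, Community Edition only allows one active user database, so initial.dbms.default_database changes which database that is rather than adding another.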
Captain_Lee
Oct 17, 2024, 06:52 AM
• Last activity: Oct 19, 2024, 07:50 PM
7
votes
1
answers
394
views
Do any of the graph based/aware databases have good mechanisms for maintaining referential integrity?
Do any of the graph-based/graph-aware databases (Neo4j, ArangoDB, OrientDB, or other) have mechanisms for maintaining referential integrity on a par with those offered by relational databases?
I'm exploring various document-based databases to find a suitable engine to use for adding an auxiliary data storage to a certain project.
I discovered graph-based/multimodel databases and they seemed like a good idea, but I was surprised to find that they don't seem to offer the same level of protection of relations/links/edges that modern relational databases have.
In particular, I'm talking about linking deletions of entities/vertices with deletion of links/edges. In a relational database, I can have a foreign key constraint that links records from one table with records in another table, and will either
1. prevent deletion of record in table A if it's referenced by record in table B ("on delete no action"), or
2. delete the referencing record(s) in table B if a referenced record in table A is being deleted.
I expected to find a similar mechanism in graph-aware databases. For example, if a "comment" vertex links to a "post" vertex (forming a many-to-1 relation), then there are the following problems/challenges to solve:
1. Prevent deletion of a post while there are edges from comments to this post. This way, a comment could never have a dangling link/edge to a post. The solution would be: depending on the link/edge properties, either
1. prevent deletion of a post until all edges from comments to this post are deleted, or
2. delete all comments linking to this post when the post is being deleted.
2. Prevent deletion of an edge from a comment to a post without deleting the comment itself, to prevent the comment from not having a link/edge to a post at all.
3. Only allow creation of a comment if an edge is created to link this comment to a post at the same time.
Are mechanisms like this really lacking in graph-based databases, or was I just unable to find them?
I know that OrientDB has the "link" data type that probably solves the second and the third problem (if a link-typed property is declared mandatory and non-null, then it's impossible to create a record without specifying the link destination, and later it's impossible to break the link by un-setting the property).
However, as far as I remember, it's possible to delete the record which a link-typed property points to, thus producing a dangling link (so the first problem is not solved).
I also know that in certain databases I can use nested documents as an alternative to having multiple linked documents. However, this approach doesn't scale well (for cases where the number of linking records can grow indefinitely). Also, it is quite limited (it can't be used as an alternative when several links are needed, say, to a post and to a user; there are other important limitations, too).
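As a point of comparison for case 1, here is a small Cypher sketch (hypothetical Post/Comment schema) of what Neo4j itself guarantees: relationships can never dangle, and a plain DELETE refuses to remove a node that still has relationships, which is loosely the "no action" behaviour; the cascade-like DETACH DELETE removes the edges but not the connected nodes, so the comment cascade in case 1.2 and the invariants in cases 2 and 3 would still need application logic or triggers (e.g. APOC) on top.

```cypher
// Hypothetical schema: (:Comment)-[:ON]->(:Post)
MATCH (p:Post {id: 42})
DELETE p;          // fails with "Cannot delete node ... because it still has relationships"
                   // while any comment edges point at it - roughly ON DELETE NO ACTION

MATCH (p:Post {id: 42})
DETACH DELETE p;   // deletes the post and its edges; the Comment nodes survive, now unlinked
```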
pvgoran
(171 rep)
Apr 10, 2020, 05:56 AM
• Last activity: Jul 30, 2024, 08:06 PM
16
votes
1
answers
6841
views
Amount of data per node in Neo4j
I need to store substantial amounts of data per node in Neo4j. The data is Unicode chunks of text. Actually not every node will have big chunks, but many of them will.
I waded through the documentation but didn't find any mention of the node size – the amount of data a single node can contain.
Does anyone have any idea?
treecoder
(263 rep)
Aug 24, 2011, 01:11 PM
• Last activity: Feb 27, 2024, 11:34 AM
1
votes
1
answers
314
views
How to start Neo4j on Azure
I'm running Neo4j v4.x on an Azure VM (Linux, Ubuntu 16.04). Everything was running fine until my project required a resizing for more storage. This seemingly went well as I proceeded to load data, but then a glitch of unknown type caused the Neo4j server to stop. I could no longer access it from the Neo4j browser or from Python code that queries it. I've restarted the VM: no effect. I even redeployed the VM, without effect. From PuTTY:
neo4j status
Neo4j is not running
I've tried numerous commands, but this one comes closest to trying to do something:
sudo ssh {myusername}@{vm ip} "systemctl restart neo4j V4_0.service"
It asked for my password and accepted it (wrong credentials were rejected), and then gave me
> Failed to restart neo4j.service: Interactive authentication required.
See system logs and 'systemctl status neo4j.service' for details.
Failed to restart V4_0.service: Interactive authentication required.
See system logs and 'systemctl status V4_0.service' for details.
I thought I'd authenticated. While debugging, I tried entering a bad username or password and was immediately rejected, as expected.
Another set of info was generated by the command
root@neo4jVM:/etc/init.d# neo4j start
result:
> Directories in use:
home: /var/lib/neo4j
config: /etc/neo4j
logs: /var/log/neo4j
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/run/neo4j
Starting Neo4j.
WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual. Started neo4j (pid 71995). It is available at http://0.0.0.0:7474/ There may be a short delay until the server is ready. See /var/log/neo4j/neo4j.log for current status.
Despite the promising message, it never started and http://0.0.0.0:7474 did not connect.
For clarification, I tried variations of that one command in PuTTY: sudo ssh dastumpf@104.43.228.191 "systemctl restart neo4j", sudo ssh dastumpf@104.43.228.191 "systemctl restart neo4j.service", etc.
Exploring the Neo4j log, I found this: "...LifecycleManagingDatabaseService@78116659' was successfully initialized, but failed to start."
What am I missing?
David A Stumpf
(121 rep)
Jul 17, 2020, 03:55 AM
• Last activity: Jan 11, 2024, 12:03 PM
1
votes
1
answers
34
views
neo4j breadcrumbs query
I want to write a query to receive all parents up to the root node from any child node, in order.
Here is the graph: https://i.sstatic.net/jDqkY.png
I found some solutions, but the results had duplicates or the nodes weren't ordered.
Thanks in advance.
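A sketch of one way to do this (labels and the CHILD_OF relationship are assumptions, since the graph image isn't reproduced above): a variable-length match up to the node with no parent gives an ordered, duplicate-free path.

```cypher
// Assumed model: each node points to its parent via [:CHILD_OF].
MATCH path = (start {id: $childId})-[:CHILD_OF*0..]->(root)
WHERE NOT (root)-[:CHILD_OF]->()                     // the root has no parent
RETURN [n IN nodes(path) | n.name] AS breadcrumbs;   // ordered from the child up to the root
```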

Darii Petru
(142 rep)
Jul 10, 2023, 06:59 AM
• Last activity: Jul 11, 2023, 11:39 AM
0
votes
1
answers
111
views
Postgres JSONs transformation
I would very much appreciate your help here; this is an important question about an architecture change.
- We use the ETL process to fetch data from external services (for example, Github).
- Extract the data (e.g. issues and PRs) and create raw_data objects
- Transform the data to our unique objects - let's call them assets.
- Load the assets as JSONs (jsonb field) to postgres DB (general assets table).
We have a few problems with this approach and now we are considering changing our pipeline.
**Bidirectional asset connection**: our assets have direct relationships between them; for example, every user has groups, and every group has users. Currently, we manually fill in the data from one of the sides again and again (before loading the data into Postgres), and we manage this in memory (scale has entered the meeting).
**Very slow analysis**: we need the ability to present the whole JSON, but we also need to run analysis across all of the assets. The problem is that if we have a lot of assets (a lot of JSONs inside the assets table), the analysis is very slow. For example, to 'find all issues that are related to the pull request' we need to iterate over all of the internal JSONs and search a specific field with a regex.
What would you recommend in this case? Suggestions:
**Manage it in Postgres**, use functions to convert JSONs into tables, and create triggers to fill the Bidirectional relation.
**Data warehouse**? I lack knowledge on the subject, but in general I'm not sure it would be ideal - first of all because of OLAP vs. OLTP: we need to run analysis, but we also need the possibility to present whole rows. And how would we fill in the bidirectional connection?
**GraphDB**? Sounds ideal for the bidirectional asset, but not sure about scaling problems.
What do you think? What would you do in this case?
Ewen Field
(1 rep)
Feb 28, 2023, 11:15 AM
• Last activity: Mar 1, 2023, 10:17 AM
2
votes
1
answers
184
views
Best free graph database for order of 500 million nodes
I am using Neo4j Community Server edition. It takes 1 minute to find nodes connected across 3 layers. Are there other free graph databases which can perform this type of query faster? The dataset is smaller than the RAM of the PC.
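Before switching engines it may be worth profiling the query itself; on data that fits in RAM, a 3-hop pattern anchored on an indexed node behaves very differently from one that starts with a full label scan. A hedged sketch, with assumed labels and properties:

```cypher
// PROFILE shows whether the plan starts with a NodeIndexSeek or a label/all-nodes scan.
CREATE INDEX item_id IF NOT EXISTS FOR (n:Item) ON (n.id);
PROFILE
MATCH (start:Item {id: $id})-[:CONNECTED_TO*3]->(reached)
RETURN count(DISTINCT reached) AS reachableInThreeHops;
```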
Rohit Raj
(121 rep)
Feb 22, 2023, 08:41 AM
• Last activity: Feb 27, 2023, 10:07 AM
0
votes
0
answers
64
views
Neo4j to MS SQL ETL
There is a Neo4j graph database storing device data, and I need that data to flow out into an MS SQL reporting data warehouse database. So essentially, I need a conversion from the graph database into strongly typed, schema-based RDBMS tables. I wanted to understand the various ways we can do this ETL and which of them gives the best performance, as the data syncing needs to be near real time (data up to 30 minutes old is fine).
Please advise.
Thanks in advance!
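On the extraction side, one common shape for this (a sketch only; the Device/Measurement labels, properties, and the $lastSyncTime parameter are assumptions, not the actual schema) is to flatten the graph into tabular rows and pull them incrementally, so an external job (SSIS, ADF, or a scheduled script) can bulk-insert into SQL Server every few minutes:

```cypher
// Incremental pull: only rows changed since the last sync, returned as flat columns
// that map one-to-one onto a staging table in the reporting warehouse.
MATCH (d:Device)-[:REPORTED]->(m:Measurement)
WHERE m.updatedAt > $lastSyncTime
RETURN d.id        AS DeviceId,
       d.name      AS DeviceName,
       m.updatedAt AS UpdatedAt,
       m.value     AS Value;
```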
Bhavesh Harsora
(141 rep)
Dec 20, 2022, 12:11 PM
0
votes
1
answers
103
views
How to evaluate database benchmarks? What to consider given a specific example?
I'm trying to figure out which database to use for a project which is supposed to implement a temporal property graph model and I am looking into some benchmarks for that. I found some papers which provided some insights and results and I also found this benchmark from TigerGraph:
https://www.tigergraph.com.cn/wp-content/uploads/2021/07/EN0302-GraphDatabase-Comparision-Benchmark-Report.pdf
Does anyone have any idea why ArangoDB is performing so poorly here, especially in comparison to Neo4j?
Furthermore, any preferences regarding a NoSQL database which consistently needs to write data while mostly answering queries that return large subtrees?
EDIT: Also, if someone has links to other benchmarks, I'd welcome that.
L.Rex
(101 rep)
Nov 14, 2022, 06:43 PM
• Last activity: Nov 30, 2022, 09:42 PM
0
votes
1
answers
422
views
Shortest paths on huge graphs: Neo4J or OrientDB?
Kia Ora,
I have a program that very frequently requires finding the fastest path (both the node sequence and total cost/length) on graphs containing ~50k nodes. Per run, I require on the order of millions of shortest path requests. I have just finished an OrientDB implementation which has significantly improved the compute time over my initial, non-graphDB attempt (which simply crashed). To perform testing, I am running the server locally on a series of distributed machines.
However, in theory, would Neo4J, or another such platform, be faster again? What gains could I expect to receive? Could I host this process online, for example?
Ngā mihi.
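For scale, this is roughly what a single request looks like in plain Cypher (the ROAD relationship type and the cost property are assumptions); note that shortestPath() minimises hop count, so a weighted "fastest path" usually goes through the Graph Data Science library or APOC instead, and millions of requests are typically cheaper against an in-memory projection than as individual driver round-trips, whichever engine is chosen.

```cypher
// One unweighted shortest-path request; cost is summed over the returned path.
MATCH (a:Node {id: $from}), (b:Node {id: $to})
MATCH p = shortestPath((a)-[:ROAD*]-(b))
RETURN [n IN nodes(p) | n.id] AS nodeSequence,
       reduce(total = 0.0, r IN relationships(p) | total + r.cost) AS totalCost;
```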
Jordan MacLachlan
(3 rep)
Dec 10, 2021, 04:18 AM
• Last activity: Aug 23, 2022, 06:00 PM
-1
votes
2
answers
503
views
Graph Database - data modelling - linking multiple pairs of nodes using the same edge?
I am trying to understand the concepts that apply to data modeling using graphs - specifically on SQL Server 2019.
One thing I am unsure of is whether the same edge can be used to connect different pairs of nodes:
- If I have three nodes: Power BI, SSAS, SQL Server
- In the pipeline I am trying to model, there are two 'Connects To' relationships:
Power BI -> SSAS -> SQL Server
- Can I use a single edge 'Connects To' to store the relationship between Power BI -> SSAS and also the relationship between SSAS -> SQL Server, or should these be two separate edges?
Previously, I have worked on both OLTP and OLAP databases. However, there seems to be a much smaller body of knowledge on best practices for developing data models using graphs.
Soap
(11 rep)
Dec 6, 2019, 05:51 AM
• Last activity: Nov 6, 2020, 07:23 AM
2
votes
1
answers
450
views
How to maintain order when modeling outline in graph database?
([TaskPaper](https://www.taskpaper.com/)) outlines have a natural hierarchy relationship, so a graph database seems like a great fit. Each item in the outline is a node of the graph that points to its parent and children. I also really like the idea of modeling TaskPaper tags as nodes that any tagged items point to.
However, there is one problem: the order of items is important in an outline, but a simple graph doesn't keep that information. What is the best way to maintain the order of items when an outline is stored as a graph?
The method should be efficient when items are added/removed. New items may be inserted before existing items, so a simple increasing counter/timestamp won't work.
The goal is to store and query a giant outline that can't fit in memory (just the necessary items are streamed to/from the user). Two common ways of querying this outline would be:
- Linear: get the first n=100 items when everything is "expanded."
- Breadth-first, with depth-first traversal of certain items: only get the top-level items, fully expanding a few selected items. (If fully expanding an item would exceed n=100 items, stop expanding.)
Common updates would be:
- Add/delete a new item somewhere in the hierarchy. Most inserts will probably be appending after the last child of an item. (Basic editing.)
- Add/delete a new sub-tree (Copy-paste a section of an outline.)
I don't think other database types would be better than a graph database, but I'm open to using other databases (relational, NoSQL, etc).
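For the ordering part specifically, one pattern that copes well with inserts is a sibling linked list (sketched below with assumed labels, as a starting point rather than a full design): each parent points to its first child and siblings point to the next sibling, so inserting an item anywhere rewires a constant number of relationships and nothing needs renumbering.

```cypher
// Read the children of one item in outline order.
MATCH (parent:Item {id: $parentId})-[:FIRST_CHILD]->(first:Item)
MATCH p = (first)-[:NEXT_SIBLING*0..]->(child:Item)
RETURN child
ORDER BY length(p);
```

Pasting a subtree then amounts to copying its nodes and splicing one NEXT_SIBLING pair at the destination.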
Leftium
(769 rep)
Mar 2, 2020, 07:12 PM
• Last activity: Nov 6, 2020, 04:11 AM
1
votes
1
answers
160
views
Neo4j import performance
I have a Neo4j 3.5.3 installation on my Ubuntu laptop (which is Intel i5, 4 GB RAM, SSD disk) and I'm trying to import a moderate-sized dataset from CSV files into the graph.
Here is the full Cypher script I use:
MATCH (x :Shop ) DETACH DELETE x RETURN count(*) AS DeletedShops;
MATCH (x :Postal ) DETACH DELETE x RETURN count(*) AS DeletedPostal;
MATCH (x :City ) DETACH DELETE x RETURN count(*) AS DeletedCities;
MATCH (x :Locator ) DETACH DELETE x RETURN count(*) AS DeletedLocators;
MATCH (x :Brand ) DETACH DELETE x RETURN count(*) AS DeletedBrands;
MATCH (x :Industry ) DETACH DELETE x RETURN count(*) AS DeletedIndustries;
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///accounts.csv.gz' AS csv
//locatorschemaname,accountname,clientname,clientindustry,accountindustry,locationcount
MERGE (b:Brand {name: csv.clientname})
FOREACH(n IN (CASE WHEN csv.clientindustry IS NOT NULL AND NOT toLower(csv.clientindustry) IN ['na','unknown','other'] THEN [1] ELSE [] END) |
MERGE (i:Industry {name: csv.clientindustry})
MERGE (b)-[:INDUSTRY]->(i)
)
FOREACH(n IN (CASE WHEN csv.accountindustry IS NOT NULL AND NOT toLower(csv.accountindustry) IN ['na','unknown','other'] THEN [1] ELSE [] END) |
MERGE (i2:Industry {name: csv.accountindustry})
MERGE (b)-[:INDUSTRY]->(i2)
)
// Brand.shops = max(locator.shops)
FOREACH(n IN (CASE WHEN csv.locatorschemaname IS NOT NULL AND csv.locatorschemaname=csv.accountname THEN [1] ELSE [] END) |
MERGE (l:Locator {name: csv.locatorschemaname, shops: toInt(csv.locationcount)})
MERGE (l)-[:BRAND]->(b)
SET b.shops = CASE
WHEN b.shops IS NULL OR b.shops < l.shops THEN l.shops
ELSE b.shops END
);
// Industry.shops = sum(Brand.shops)
MATCH (b:Brand)-[:INDUSTRY]->(i:Industry)
WITH i.name AS iname, sum(b.shops) AS isum
MATCH (ii:Industry {name: iname})
SET ii.shops = isum;
// NOTE: This is done in multiple-pass mode, to avoid performance issues with NEo4j CE 3.4.7 on ubuntu
// NOTE: We do not use WITH HEADERS as it adds a 40%+ overhead
// PASS 1. Shops
USING PERIODIC COMMIT 10000
LOAD CSV FROM 'file:///locations.csv.gz' AS csv
// schemaname,clientkey,name,address1,address2,city,region,country,postalcode,latitude,longitude
// find existing locator node
MATCH (l :Locator {name: csv[0]})
CREATE (s:Shop {
locatorname: csv[0],
clientkey: csv[1],
latitude: toFloat(csv[9]),
longitude: toFloat(csv[10])
})
CREATE (s)-[:LOCATOR]->(l);
CREATE INDEX ON :Shop (locatorname, clientkey);
MATCH (:Shop) RETURN count(*) AS ShopsCreated;
// PASS 2. cities
USING PERIODIC COMMIT 10000
LOAD CSV FROM 'file:///locations.csv.gz' AS csv
WITH csv WHERE csv[5] IS NOT NULL
MATCH (s:Shop {locatorname: csv[0], clientkey: csv[1]})
MERGE (city :City {name: csv[5], country: csv[7]}) ON CREATE SET city.region = csv[6]
MERGE (s)-[:CITY]->(city);
MATCH (:City) RETURN count(*) AS CitiesCreated;
// PASS 3. postal codes
USING PERIODIC COMMIT 10000
LOAD CSV FROM 'file:///locations.csv.gz' AS csv
WITH csv WHERE csv[8] IS NOT NULL
MATCH (s:Shop {locatorname: csv[0], clientkey: csv[1]})
MERGE (postal :Postal {name: csv[8], country: csv[7]}) ON CREATE SET postal.region = csv[6]
MERGE (s)-[:POSTAL]->(postal);
MATCH (:Postal) RETURN count(*) AS PostalcodesCreated;
CREATE CONSTRAINT ON (i:Industry) ASSERT i.name IS UNIQUE;
CREATE CONSTRAINT ON (b:Brand) ASSERT b.name IS UNIQUE;
CREATE CONSTRAINT ON (l:Locator) ASSERT l.name IS UNIQUE;
The problem is, the script makes it fairly quickly to the end of "PASS 1", and then it apparently hangs on "PASS 2". The server process is churning CPU and nothing visible happens. This has lasted at least 120 minutes and does not look like it's going to finish any time soon.
I use default settings for heap size etc., but the on-disk size of the whole dataset (checked in /var/lib/neo4j/data) is ~500 MB, so this machine should handle it.
Here is the output so far:
DeletedShops
0
DeletedPostal
0
DeletedCities
0
DeletedLocators
0
DeletedBrands
0
DeletedIndustries
0
IndustriesCreated
18
BrandsCreated
1326
LocatorsCreated
2092
ShopsCreated
937488
// very long wait here
And here is some ps output:
[11:50:56][filip@lap2:~/neo4j]$ ps fuwww pidof java
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
filip 4966 0.0 0.1 3979904 7412 pts/0 Sl+ 08:57 0:09 /usr/lib/jvm/java-8-oracle/bin/java -jar /usr/bin/../share/cypher-shell/lib/cypher-shell-all.jar -u neo4j -p neo --format plain
neo4j 2411 98.1 25.6 4862652 1012488 ? Ssl 08:50 177:11 /usr/bin/java -cp /var/lib/neo4j/plugins:/etc/neo4j:/usr/share/neo4j/lib/*:/var/lib/neo4j/plugins/* -server -Xms950m -Xmx950m -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+AlwaysPreTouch -XX:+UnlockExperimentalVMOptions -XX:+TrustFinalNonStaticFields -XX:+DisableExplicitGC -Djdk.tls.ephemeralDHKeySize=2048 -Djdk.tls.rejectClientInitiatedRenegotiation=true -Dunsupported.dbms.udc.source=debian -Dfile.encoding=UTF-8 org.neo4j.server.CommunityEntryPoint --home-dir=/var/lib/neo4j --config-dir=/etc/neo4j
How can I rewrite the script to achieve the same thing faster?
What can I do to diagnose the bottleneck?
Is this at all doable using a limited (1-2 GB) Java heap size - and if not, why not?
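One guess worth testing, not a confirmed diagnosis: PASS 2 and 3 MERGE on :City/:Postal properties that have no index and MATCH shops on a composite index that may still be populating, so every CSV row can degrade into a label scan. Creating the lookup indexes before those passes and waiting for them to come online would look like this (Neo4j 3.5 syntax; the index targets are my assumption):

```cypher
CREATE INDEX ON :Shop (locatorname, clientkey);
CREATE INDEX ON :City (name, country);
CREATE INDEX ON :Postal (name, country);
// Block until the indexes are online before starting PASS 2/3.
CALL db.awaitIndexes(300);
```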
**Update**: here is the result of a short strace -v -tt -f -s 100 -p run against the server process:
5020 13:04:53.108793 epoll_wait(238,
5013 13:04:53.108847 epoll_wait(235,
4887 13:04:53.108865 epoll_wait(232,
4879 13:04:53.108874 epoll_wait(229,
4617 13:04:53.108890 epoll_wait(226,
4590 13:04:53.108897 epoll_wait(223,
3557 13:04:53.108916 restart_syscall(
3016 13:04:53.108925 restart_syscall(
3015 13:04:53.108942 restart_syscall(
3014 13:04:53.108950 restart_syscall(
3013 13:04:53.108966 accept(256,
3012 13:04:53.108981 epoll_wait(263,
3011 13:04:53.108998 epoll_wait(260,
3010 13:04:53.109006 accept(248,
3009 13:04:53.109023 epoll_wait(255,
3008 13:04:53.109032 epoll_wait(252,
3007 13:04:53.109047 restart_syscall(
3005 13:04:53.109055 epoll_wait(220,
3001 13:04:53.109083 futex(0x7efc2999783c, FUTEX_WAIT_PRIVATE, 0, NULL
3000 13:04:53.109098 futex(0x7efc29995d0c, FUTEX_WAIT_PRIVATE, 0, NULL
2999 13:04:53.109116 futex(0x7efc2996e7d8, FUTEX_WAIT_PRIVATE, 0, NULL
2998 13:04:53.109125 futex(0x7efc2996eb68, FUTEX_WAIT_PRIVATE, 0, NULL
2997 13:04:53.109141 restart_syscall(
2996 13:04:53.109149 restart_syscall(
2995 13:04:53.109166 restart_syscall(
2994 13:04:53.109174 futex(0x7efc298c8df8, FUTEX_WAIT_PRIVATE, 0, NULL
2993 13:04:53.109190 restart_syscall(
2992 13:04:53.109198 restart_syscall(
2991 13:04:53.109214 restart_syscall(
2990 13:04:53.109222 restart_syscall(
2989 13:04:53.109239 restart_syscall(
2951 13:04:53.109248 restart_syscall(
2950 13:04:53.109266 futex(0x7efc2843f37c, FUTEX_WAIT_PRIVATE, 0, NULL
2949 13:04:53.109274 restart_syscall(
2948 13:04:53.109290 restart_syscall(
2947 13:04:53.109299 restart_syscall(
2946 13:04:53.109316 futex(0x7efc30ba6580, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, 0xffffffff
2945 13:04:53.109325 futex(0x7efc2842bb7c, FUTEX_WAIT_PRIVATE, 0, NULL
2932 13:04:53.109342 futex(0x7efc283f3f7c, FUTEX_WAIT_PRIVATE, 0, NULL
2927 13:04:53.109351 futex(0x7efc283f157c, FUTEX_WAIT_PRIVATE, 0, NULL
2924 13:04:53.109369 restart_syscall(
2621 13:04:53.109377 futex(0x7efc28048278, FUTEX_WAIT_PRIVATE, 0, NULL
2620 13:04:53.109395 futex(0x7efc28046578, FUTEX_WAIT_PRIVATE, 0, NULL
2619 13:04:53.109405 futex(0x7efc2803607c, FUTEX_WAIT_PRIVATE, 0, NULL
2618 13:04:53.109423 futex(0x7efc28034378, FUTEX_WAIT_PRIVATE, 0, NULL
2617 13:04:53.109432 futex(0x7efc2803277c, FUTEX_WAIT_PRIVATE, 0, NULL
2616 13:04:53.109449 futex(0x7efc28030b7c, FUTEX_WAIT_PRIVATE, 0, NULL
2615 13:04:53.109458 restart_syscall(
2614 13:04:53.109476 futex(0x7efc2802c67c, FUTEX_WAIT_PRIVATE, 0, NULL
2613 13:04:53.109485 futex(0x7efc2802aa78, FUTEX_WAIT_PRIVATE, 0, NULL
2612 13:04:53.109507 futex(0x7efc28028e78, FUTEX_WAIT_PRIVATE, 0, NULL
2611 13:04:53.109517 futex(0x7efc28027278, FUTEX_WAIT_PRIVATE, 0, NULL
2610 13:04:53.109536 futex(0x7efc2800fb78, FUTEX_WAIT_PRIVATE, 0, NULL
2411 13:04:53.109545 futex(0x7efc317f19d0, FUTEX_WAIT, 2610, NULL
2992 13:04:53.115544 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.115730 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.115828 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998460}
2997 13:04:53.118313 ) = -1 ETIMEDOUT (Connection timed out)
2997 13:04:53.118352 futex(0x7efc298e0128, FUTEX_WAKE_PRIVATE, 1) = 0
2997 13:04:53.118394 futex(0x7efc298e0178, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=99998722}
2951 13:04:53.125119 ) = -1 ETIMEDOUT (Connection timed out)
2951 13:04:53.125180 futex(0x7efc28442a28, FUTEX_WAKE_PRIVATE, 1) = 0
2951 13:04:53.125217 futex(0x7efc28442a78, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=49998764}
2992 13:04:53.126016 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.126360 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.126435 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9996862}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.136631 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.136735 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9997659}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.146911 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.147016 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998199}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.157194 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.157259 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998611}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.167441 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.167505 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998486}
2951 13:04:53.175350 ) = -1 ETIMEDOUT (Connection timed out)
2951 13:04:53.175419 futex(0x7efc28442a28, FUTEX_WAKE_PRIVATE, 1) = 0
2951 13:04:53.175469 futex(0x7efc28442a78, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=49999014}
2992 13:04:53.177624 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.177661 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.177704 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998753}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.187869 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.187926 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998345}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.198107 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.198173 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998906}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.208351 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.208445 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998819}
2997 13:04:53.218546 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.218607 ) = -1 ETIMEDOUT (Connection timed out)
2997 13:04:53.218627 futex(0x7efc298e0128, FUTEX_WAKE_PRIVATE, 1
2992 13:04:53.218642 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1
2997 13:04:53.218654 ) = 0
2992 13:04:53.218665 ) = 0
2997 13:04:53.218692 futex(0x7efc298e0178, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=99997816}
2992 13:04:53.218716 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998951}
2951 13:04:53.225620 ) = -1 ETIMEDOUT (Connection timed out)
2951 13:04:53.225685 futex(0x7efc28442a28, FUTEX_WAKE_PRIVATE, 1) = 0
2951 13:04:53.225726 futex(0x7efc28442a78, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=49998661}
2992 13:04:53.228844 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.228894 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.228977 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998496}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.239167 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.239237 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9997919}
2615 13:04:53.248889 ) = -1 ETIMEDOUT (Connection timed out)
2615 13:04:53.248964 futex(0x7efc2802ee28, FUTEX_WAKE_PRIVATE, 1) = 0
2615 13:04:53.249190 getrusage(RUSAGE_THREAD, {ru_utime={tv_sec=6, tv_usec=131107}, ru_stime={tv_sec=0, tv_usec=512377}, ru_maxrss=1415248, ru_ixrss=0, ru_idrss=0, ru_isrss=0, ru_minflt=3, ru_majflt=7, ru_nswap=0, ru_inblock=904, ru_oublock=0, ru_msgsnd=0, ru_msgrcv=0, ru_nsignals=0, ru_nvcsw=45187, ru_nivcsw=300}) = 0
2615 13:04:53.249267 futex(0x7efc2802ee78, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=299998402}
2992 13:04:53.249346 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.249374 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.249416 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9999076}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.259600 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.259669 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998583}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.269862 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.269930 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998318}
2951 13:04:53.275876 ) = -1 ETIMEDOUT (Connection timed out)
2951 13:04:53.275954 futex(0x7efc28442a28, FUTEX_WAKE_PRIVATE, 1) = 0
2951 13:04:53.276004 futex(0x7efc28442a78, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=49998347}
2992 13:04:53.280047 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.280096 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.280141 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998508}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.290333 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.290392 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998348}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.300555 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.300609 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998900}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.310804 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.310878 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998305}
2997 13:04:53.318882 ) = -1 ETIMEDOUT (Connection timed out)
2997 13:04:53.318980 futex(0x7efc298e0128, FUTEX_WAKE_PRIVATE, 1) = 0
2997 13:04:53.319107 futex(0x7efc298e0178, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=99998601}
2992 13:04:53.320991 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.321028 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.321071 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9999116}
2951 13:04:53.326159 ) = -1 ETIMEDOUT (Connection timed out)
2951 13:04:53.326225 futex(0x7efc28442a28, FUTEX_WAKE_PRIVATE, 1) = 0
2951 13:04:53.326265 futex(0x7efc28442a78, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=49997885}
2992 13:04:53.331172 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.331224 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.331276 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998352}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.341419 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.341466 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998473}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.351601 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.351647 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998359}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.361788 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.361834 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998171}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.371984 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.372032 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998375}
2951 13:04:53.376376 ) = -1 ETIMEDOUT (Connection timed out)
2951 13:04:53.376427 futex(0x7efc28442a28, FUTEX_WAKE_PRIVATE, 1) = 0
2951 13:04:53.376465 futex(0x7efc28442a78, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=49999429}
2992 13:04:53.382147 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.382195 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.382228 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998186}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.392366 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.392410 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998231}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.402554 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.402602 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998401}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.412743 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.412790 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998361}
2997 13:04:53.419251 ) = -1 ETIMEDOUT (Connection timed out)
2997 13:04:53.419300 futex(0x7efc298e0128, FUTEX_WAKE_PRIVATE, 1) = 0
2997 13:04:53.419343 futex(0x7efc298e0178, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=99999061}
2992 13:04:53.422887 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.422928 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.422961 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998399}
2951 13:04:53.426558 ) = -1 ETIMEDOUT (Connection timed out)
2951 13:04:53.426595 futex(0x7efc28442a28, FUTEX_WAKE_PRIVATE, 1) = 0
2951 13:04:53.426629 futex(0x7efc28442a78, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=49999301}
2992 13:04:53.433070 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.433119 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.433154 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998445}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.443295 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.443341 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998386}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.453473 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.453522 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998338}) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.463666 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.463712 futex(0x7efc2872ddc8, FUTEX_WAIT_PRIVATE, 0, {tv_sec=0, tv_nsec=9998111}
3002 13:04:53.471612 futex(0x7efc283e7a7c, FUTEX_WAKE_PRIVATE, 1) = 1
3002 13:04:53.471696 futex(0x7efc29998f7c, FUTEX_WAIT_PRIVATE, 0, NULL
2924 13:04:53.471717 ) = 0
2924 13:04:53.471734 futex(0x7efc283e7a28, FUTEX_WAKE_PRIVATE, 1) = 0
2924 13:04:53.471769 mprotect(0x7efc31817000, 4096, PROT_READ) = 0
2924 13:04:53.471815 mprotect(0x7efc31817000, 4096, PROT_READ|PROT_WRITE) = 0
2924 13:04:53.471849 mprotect(0x7efc31818000, 4096, PROT_NONE) = 0
2924 13:04:53.472428 futex(0x7efc2802c67c, FUTEX_WAKE_PRIVATE, 1) = 1
2614 13:04:53.472463 ) = 0
2924 13:04:53.472473 futex(0x7efc283e7a78, FUTEX_WAIT_PRIVATE, 0, NULL
2614 13:04:53.472482 futex(0x7efc2802c628, FUTEX_WAKE_PRIVATE, 1) = 0
2614 13:04:53.472508 futex(0x7efc28028e78, FUTEX_WAKE_PRIVATE, 1) = 1
2612 13:04:53.472530 ) = 0
2612 13:04:53.472544 futex(0x7efc28028e28, FUTEX_WAKE_PRIVATE, 1) = 0
2612 13:04:53.472569 futex(0x7efc28027278, FUTEX_WAKE_PRIVATE, 1) = 1
2611 13:04:53.472592 ) = 0
2611 13:04:53.472610 futex(0x7efc28027228, FUTEX_WAKE_PRIVATE, 1) = 0
2611 13:04:53.472640 futex(0x7efc2802aa78, FUTEX_WAKE_PRIVATE, 1) = 1
2992 13:04:53.473808 ) = -1 ETIMEDOUT (Connection timed out)
2992 13:04:53.473849 futex(0x7efc2872dd78, FUTEX_WAKE_PRIVATE, 1) = 0
2992 13:04:53.473891 futex(0x7efc2872d978, FUTEX_WAIT_PRIVATE, 0, NULL
2612 13:04:53.474207 sched_yield() = 0
2614 13:04:53.474236 sched_yield(
2613 13:04:53.474249 ) = 0
2614 13:04:53.474260 ) = 0
2613 13:04:53.474271 futex(0x7efc2802aa28, FUTEX_WAKE_PRIVATE, 1
2611 13:04:53.474283 sched_yield(
2613 13:04:53.474295 ) = 0
2611 13:04:53.474306 ) = 0
2613 13:04:53.474317 futex(0x7efc283e7a78, FUTEX_WAKE_PRIVATE, 1
2612 13:04:53.474335 sched_yield() = 0
2614 13:04:53.474365 sched_yield() = 0
2611 13:04:53.474414 sched_yield(
2613 13:04:53.474425 ) = 1
2611 13:04:53.474436 ) = 0
2612 13:04:53.474449 sched_yield(
2924 13:04:53.474460 ) = 0
2612 13:04:53.474470 ) = 0
2924 13:04:53.474480 futex(0x7efc283e7a28, FUTEX_WAKE_PRIVATE, 1
2614 13:04:53.474492 sched_yield(
2924 13:04:53.474503 ) = 0
2614 13:04:53.474514 ) = 0
2924 13:04:53.474525 futex(0x7efc283e7a7c, FUTEX_WAIT_PRIVATE, 0, NULL
2611 13:04:53.474544 sched_yield(
2613 13:04:53.474569 futex(0x7efc283e7a7c, FUTEX_WAKE_PRIVATE, 1
2612 13:04:53.474581 futex(0x7efc28028e7c, FUTEX_WAIT_PRIVATE, 0, NULL
2614 13:04:53.474600 futex(0x7efc2802c678, FUTEX_WAIT_PRIVATE, 0, NULL
2924 13:04:53.474610 ) = -1 EAGAIN (Resource temporarily unavailable)
2613 13:04:53.474619 ) = 0
2924 13:04:53.474627 futex(0x7efc283e7a28, FUTEX_WAIT_PRIVATE, 2, NULL
2613 13:04:53.474635 futex(0x7efc283e7a28, FUTEX_WAKE_PRIVATE, 1
2924 13:04:53.474642 ) = -1 EAGAIN (Resource temporarily unavailable)
2613 13:04:53.474650 ) = 0
2924 13:04:53.474658 futex(0x7efc283e7a28, FUTEX_WAKE_PRIVATE, 1
2613 13:04:53.474666 futex(0x7efc2802aa7c, FUTEX_WAIT_PRIVATE, 0, NULL
2924 13:04:53.474673 ) = 0
2611 13:04:53.474681 ) = 0
2924 13:04:53.474689 futex(0x7efc28028e7c, FUTEX_WAKE_PRIVATE, 1
2611 13:04:53.474697 futex(0x7efc2802727c, FUTEX_WAIT_PRIVATE, 0, NULL
2924 13:04:53.474707 ) = 1
2612 13:04:53.474715 ) = 0
2924 13:04:53.474723 futex(0x7efc283e7a78, FUTEX_WAIT_PRIVATE, 0, NULL
2612 13:04:53.474732 futex(0x7efc28028e28, FUTEX_WAKE_PRIVATE, 1) = 0
2612 13:04:53.474757 futex(0x7efc2802c678, FUTEX_WAKE_PRIVATE, 1
2614 13:04:53.474772 ) = 0
2612 13:04:53.474781 ) = 1
2614 13:04:53.474790 futex(0x7efc2802c628, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable)
2612 13:04:53.474829 futex(0x7efc2802c628, FUTEX_WAKE_PRIVATE, 1
2614 13:04:53.474838 futex(0x7efc2802c628, FUTEX_WAKE_PRIVATE, 1
2612 13:04:53.474848 ) = 0
2614 13:04:53.474856 ) = 0
2612 13:04:53.474865 futex(0x7efc28028e78, FUTEX_WAIT_PRIVATE, 0, NULL
2614 13:04:53.474874 futex(0x7efc2802aa7c, FUTEX_WAKE_PRIVATE, 1) = 1
2613 13:04:53.474895 ) = 0
2614 13:04:53.474903 futex(0x7efc2802c67c, FUTEX_WAIT_PRIVATE, 0, NULL
2613 13:04:53.474911 futex(0x7efc2802aa28, FUTEX_WAKE_PRIVATE, 1) = 0
2613 13:04:53.474932 futex(0x7efc283e7a78, FUTEX_WAKE_PRIVATE, 1
2924 13:04:53.474944 ) = 0
2613 13:04:53.474952 ) = 1
2924 13:04:53.474959 futex(0x7efc283e7a28, FUTEX_WAIT_PRIVATE, 2, NULL
2613 13:04:53.474967 futex(0x7efc283e7a28, FUTEX_WAKE_PRIVATE, 1
2924 13:04:53.474975 ) = -1 EAGAIN (Resource temporarily unavailable)
2613 13:04:53.474983 ) = 0
2924 13:04:53.474991 futex(0x7efc283e7a28, FUTEX_WAKE_PRIVATE, 1
2613 13:04:53.474999 futex(0x7efc2802aa78, FUTEX_WAIT_PRIVATE, 0, NULL
2924 13:04:53.475006 ) = 0
2924 13:04:53.475024 futex(0x7efc2802727c, FUTEX_WAKE_PRIVATE, 1) = 1
2611 13:04:53.475043 ) = 0
2924 13:04:53.475051 futex(0x7efc283e7a7c, FUTEX_WAIT_PRIVATE, 0, NULL
2611 13:04:53.475059 futex(0x7efc28027228, FUTEX_WAKE_PRIVATE, 1) = 0
2611 13:04:53.475080 futex(0x7efc28028e78, FUTEX_WAKE_PRIVATE, 1
filiprem
(6747 rep)
Mar 27, 2019, 10:52 AM
• Last activity: Sep 21, 2020, 09:02 AM
1
votes
2
answers
1828
views
How to deal with a person that may have multiple alias names?
I am designing a Neo4j database which will primarily be used to store information about people. One of the requirements is that it must be able to store the alias names of a person.
I'm having trouble deciding how I should design my database so that it deals with people that have multiple aliases.
I have two solutions:
1. Store the person's alias names in a single field, comma separated. This is simple, but makes querying difficult.
2. Store a person's alias names in a separate table/entity called alias. This seems overly complex but makes querying easier than option 1.
Which of these, if any, is the correct answer to the problem? Or alternatively, is there a better solution which I am overlooking?
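Since this is Neo4j rather than a relational schema, option 2 maps naturally onto separate alias nodes; a small sketch (labels, properties, and the KNOWN_AS relationship are illustrative, using 3.x index syntax):

```cypher
CREATE INDEX ON :Alias(name);   // make alias lookups an index seek
MERGE (p:Person {id: $personId})
MERGE (a:Alias {name: $alias})
MERGE (p)-[:KNOWN_AS]->(a);

// Find a person by any of their aliases:
MATCH (p:Person)-[:KNOWN_AS]->(:Alias {name: $alias})
RETURN p;
```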
JavascriptLoser
(139 rep)
Jul 11, 2018, 02:56 AM
• Last activity: Jul 24, 2020, 07:05 PM
0
votes
1
answers
33
views
layout neo4j graph based on points
I have a neo4j database with some data stored in Points (e.g. coordinates: point({srid:4326, x:-4.224099535439225, y:52.10621881941465})).
I want the neo4j browser to honour these points when laying out the graph visually. Is there a way to interact with its layout manager, and ask it to use these points?
simonalexander2005
(101 rep)
Jan 8, 2020, 10:48 AM
• Last activity: Jan 8, 2020, 12:06 PM
1
votes
0
answers
29
views
in Neo4j my Cypher query pulls data from most recently loaded object, not the one specified
I believe this code demonstrates a bug in this software. But given my lack of experience with Neo4j, maybe something is coded incorrectly.
I would like to know if the unexpected output is due to a bug.
Given 2 tiny **.csv** files, I load their data, and when I query each loaded object the data seems OK.
Then when I query object **T7** it shows the data from object **T8**.
Thanks!
# Steps to reproduce the behaviour
## File t7.csv
col1
t7 1 row
## File t8.csv
col1
t8 2 rows
t8 2 rows
I then log in with the following command:
$ bin/cypher-shell -u neo4j -p my password
I then run the following commands:
MATCH (T7)
OPTIONAL MATCH (T7)-[r]-()
WITH T7,r LIMIT 5000
DELETE T7,r
RETURN count(T7) as deletedNodesCount;
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///t7.csv" as line
CREATE (T7 {col1:line.col1});
MATCH T7 = (t7)
RETURN t7.col1;
The above commands load the data and return the values found as follows:
> +------------+
> | a.col1 |
> +------------+
> | "t7 1 row" |
> +------------+
I perform the same task for the T8 data:
MATCH (T8)
OPTIONAL MATCH (T8)-[r]-()
WITH T8,r LIMIT 5000
DELETE T8,r
RETURN count(T8) as deletedNodesCount;
LOAD CSV WITH HEADERS FROM "file:///t8.csv" as line
CREATE (T8 {col1:line.col1});
MATCH T8 = (t8)
RETURN t8.col1;
The above commands load the data and return the values found as follows:
> +-------------+
> | a.col1 |
> +-------------+
> | "t8 2 rows" |
> | "t8 2 rows" |
> +-------------+
Now if I run the following command, it returns the data from the most recent load, not the node I expected:
MATCH T7 = (t7)
RETURN t7.col1;
I expected this output:
> +------------+
> | a.col1 |
> +------------+
> | "t7 1 row" |
> +------------+
Instead I received the following output:
> +-------------+
> | a.col1 |
> +-------------+
> | "t8 2 rows" |
> | "t8 2 rows" |
> +-------------+
## Version information
### Neo4j
call dbms.components() yield name, versions, edition unwind versions as version return name, version, edition;
> +----------------------------------------+
> | name | version | edition |
> +----------------------------------------+
> | "Neo4j Kernel" | "3.5.1" | "community" |
> +----------------------------------------+
### OS
Mac OS 10.13.6
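For what it's worth, this looks like expected Cypher behaviour rather than a bug: in CREATE (T7 {col1: ...}), T7 is a variable name, not a label, so the nodes are created unlabelled and MATCH T7 = (t7) matches every node in the store; likewise, the delete step run before loading t8.csv removed the t7 nodes as well. A sketch using labels, which keeps the two loads separate:

```cypher
// :T7 and :T8 are labels; the variable inside CREATE (...) does not label anything.
LOAD CSV WITH HEADERS FROM "file:///t7.csv" AS line
CREATE (:T7 {col1: line.col1});

LOAD CSV WITH HEADERS FROM "file:///t8.csv" AS line
CREATE (:T8 {col1: line.col1});

MATCH (t:T7) RETURN t.col1;   // only the t7.csv row
MATCH (t:T8) RETURN t.col1;   // only the t8.csv rows
```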
drum1440
(11 rep)
Dec 8, 2019, 08:07 PM
• Last activity: Dec 11, 2019, 12:23 PM
0
votes
1
answers
382
views
How to design a graph database in this scenario?
Here is my scenario. I have a **pre-defined data type structure** for books. Just take it as an example for the sake of simplicity. The structure looks like the image below. It's a Labeled Property Graph and the information is self-explanatory. **This data type structure is fixed**; I cannot change it, I just use it.
When there is 1 book, let's call it *Harry Potter*, in the system, it might look like below: the book has its own properties (ID, Name, ...) and also contains a field of type MandatoryData. By looking at this graph, we can know all information about the book.
The problem happens when I have 2 books in the system. In this case, there is another book called *Graph DB*, with its information highlighted in the image. The problem of this design is: **we don't know which information belongs to which book**. For example, we cannot distinguish the publishedYear anymore.
My question is: how to solve or avoid this problem? Should I create 1 MandatoryData for each book? Could you propose me any design?
I'm using Neo4j and Cypher. Thank you for your help!
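A sketch of the "one MandatoryData per book" option mentioned above (labels, properties, and the relationship name are assumptions): giving each Book its own MandatoryData node means values like publishedYear can no longer be confused between books.

```cypher
// Each book gets its own MandatoryData node instead of sharing one.
MERGE (b:Book {id: $bookId})
  ON CREATE SET b.name = $name
CREATE (md:MandatoryData {publishedYear: $year})
CREATE (b)-[:HAS_MANDATORY_DATA]->(md);

// All information for one book:
MATCH (b:Book {id: $bookId})-[:HAS_MANDATORY_DATA]->(md:MandatoryData)
RETURN b, md;
```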
Triet Doan
(173 rep)
Nov 22, 2017, 02:55 PM
• Last activity: Aug 22, 2019, 01:35 AM
Showing page 1 of 20 total questions