
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

2 votes
3 answers
7478 views
Parse value from different JSON strings (No jq)
I'm using a bash script that needs to read the JSON output and parse a value from different JSON variables or strings. Here's the sample output. It needs to read the value next to the Content or from any other variable. Such as, Lookup Content and be able to print Value1. Lookup DeviceType and be able to print Value4. Sample Input:
{"Content":"Value1","CreationMethod":"Value2","database":"Value3","DeviceType":"Value4"}
I tried the combination of sed and awk:
sed 's/["]/ /g' | awk '{print $4}'
... but this only works if the position of Content stays the same in the output. In a different JSON output the position of Content changes, putting the value out of scope, so awk '{print $4}' picks up the wrong value.
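Without jq, one fragile but workable option is to look the key up by name instead of by field position. A sketch (the `lookup` helper name is my own; it only handles flat objects with unescaped string values):

```shell
# Extract the string value for a named key from a flat JSON object,
# regardless of where the key sits. Not a real parser: it will break
# on nested objects or escaped quotes.
json='{"Content":"Value1","CreationMethod":"Value2","database":"Value3","DeviceType":"Value4"}'

lookup() {
  # $1 = key name, $2 = JSON text
  printf '%s\n' "$2" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

lookup Content "$json"      # Value1
lookup DeviceType "$json"   # Value4
```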
Riz (59 rep)
Nov 8, 2019, 03:46 PM • Last activity: Aug 3, 2025, 12:04 AM
4 votes
2 answers
2385 views
jq: Printing multiple values from multiple arrays at once
The default functionality of jq is to send each object from an array one at a time, though the join operator can merge those values. My problem is in trying to print all the values from multiple arrays at once. Taking this example:
{
    "key1": {
        "list1": [
            "val1",
            "val2",
            "val3"
        ]
    },
    "key2": {
        "list1": [
            "val4",
            "val5"
        ]
    },
    "key3": {
        "list1": [
            "val6"
        ]
    }
}
I'd like to print:
val1 val2 val3 val4 val5 val6
And so far have this:
jq -r 'to_entries[] | { list: .value.list1 } | .list | join(" ")' test.json
*(More verbose than necessary to help reviewers.)* which gives:
val1 val2 val3
val4 val5
val6
Is there a way to gather all the values together in one command?
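For what it's worth, a hedged one-call sketch (assuming, as in the sample, that every top-level value carries a `list1` array): collect the streamed values into a single array before joining:

```shell
printf '%s' '{
  "key1": {"list1": ["val1", "val2", "val3"]},
  "key2": {"list1": ["val4", "val5"]},
  "key3": {"list1": ["val6"]}
}' | jq -r '[.[].list1[]] | join(" ")'
# val1 val2 val3 val4 val5 val6
```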
T145 (223 rep)
Apr 27, 2024, 07:33 PM • Last activity: Jul 21, 2025, 12:35 PM
2 votes
3 answers
2013 views
Convert SQLite CSV output to JSON
I want to format SQLite output in JSON format from the command line. Currently, I have CSV output that looks like this: label1,value1 label2,value2 label3,value3 ... Now I'd like to have it formatted like this: {'label1' : 'value1', 'label2': 'value2', ... } Thanks!
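As a hedged sketch (note that valid JSON requires double quotes rather than the single quotes shown above), awk can fold the label,value lines into one object:

```shell
printf 'label1,value1\nlabel2,value2\nlabel3,value3\n' |
awk -F, '
  BEGIN { printf "{" }
  { printf "%s\"%s\": \"%s\"", (NR > 1 ? ", " : ""), $1, $2 }
  END { print "}" }
'
# {"label1": "value1", "label2": "value2", "label3": "value3"}
```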
michelemarcon (3593 rep)
Jan 13, 2016, 03:45 PM • Last activity: Jun 12, 2025, 08:10 PM
0 votes
1 answer
2681 views
How to send with curl JSON from another curl command output
I want to get JSON with a curl command; with the command below I am getting this output:

curl -GET http://localhost:9200/oldindex/_mapping?pretty
{ "gl-events_1" : { "mappings" : { "message" : { "dynamic" : "false", "dynamic_templates" : [ { "fields" : { "path_match" : "fields.*", "mapping" : { "doc_values" : true, "index" : true, "type" : "keyword" } } } ], "properties" : { "alert" : { "type" : "boolean" }, "event_definition_id" : { "type" : "keyword" }, "event_definition_type" : { "type" : "keyword" }, "fields" : { "type" : "object", "dynamic" : "true" }, "id" : { "type" : "keyword" }, "key" : { "type" : "keyword" }, "key_tuple" : { "type" : "keyword" }, "message" : { "type" : "text", "norms" : false, "fields" : { "keyword" : { "type" : "keyword" } }, "analyzer" : "standard" }, "origin_context" : { "type" : "keyword" }, "priority" : { "type" : "long" }, "source" : { "type" : "keyword" }, "source_streams" : { "type" : "keyword" }, "streams" : { "type" : "keyword" }, "timerange_end" : { "type" : "date", "format" : "yyyy-MM-dd HH:mm:ss.SSS" }, "timerange_start" : { "type" : "date", "format" : "yyyy-MM-dd HH:mm:ss.SSS" }, "timestamp" : { "type" : "date", "format" : "yyyy-MM-dd HH:mm:ss.SSS" }, "timestamp_processing" : { "type" : "date", "format" : "yyyy-MM-dd HH:mm:ss.SSS" }, "triggered_jobs" : { "type" : "keyword" } } } } } }

Now I want to store this output as a JSON file, so I copied it into a file and gave it a .json extension. But when I try to PUT it with curl I get the error below:

curl -X PUT http://localhost:9200/new_good -H 'Content-Type: application/json' -d sampl.json
{"error":{"root_cause":[{"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"}],"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"},"status":500}

But when I run the command below, with the same JSON given directly, it works:

curl -X PUT \
  http://localhost:9200/new_good \
  -H 'Content-Type: application/json' \
  -d '{"mappings" : { "message" : { "dynamic_templates" : [ { "internal_fields" : { "match" : "gl2_*", "match_mapping_type" : "string", "mapping" : { "type" : "keyword" } } }, { "store_generic" : { "match_mapping_type" : "string", "mapping" : { "type" : "keyword" } } } ], "properties" : { "LoggerName" : { "type" : "keyword" }, "MessageParam0" : { "type" : "keyword" }, "MessageParam1" : { "type" : "long" }, "MessageParam2" : { "type" : "keyword" }, "MessageParam3" : { "type" : "keyword" }, "MessageParam4" : { "type" : "keyword" }, "MessageParam5" : { "type" : "keyword" }, "MessageParam6" : { "type" : "keyword" }, "MessageParam7" : { "type" : "keyword" }, "MessageParam8" : { "type" : "keyword" }, "Severity" : { "type" : "keyword" }, "SourceClassName" : { "type" : "keyword" }, "SourceMethodName" : { "type" : "keyword" }, "SourceSimpleClassName" : { "type" : "keyword" }, "StackTrace" : { "type" : "keyword" }, "Thread" : { "type" : "keyword" }, "Time" : { "type" : "keyword" }, "facility" : { "type" : "keyword" }, "full_message" : { "type" : "text", "analyzer" : "standard" }, "gl2_accounted_message_size" : { "type" : "long" }, "gl2_message_id" : { "type" : "keyword" }, "gl2_processing_timestamp" : { "type" : "date", "format" : "yyyy-MM-dd HH:mm:ss.SSS" }, "gl2_receive_timestamp" : { "type" : "date", "format" : "yyyy-MM-dd HH:mm:ss.SSS" }, "gl2_remote_ip" : { "type" : "keyword" }, "gl2_remote_port" : { "type" : "long" }, "gl2_source_input" : { "type" : "keyword" }, "gl2_source_node" : { "type" : "keyword" }, "level" : { "type" : "long" }, "message" : { "type" : "text", "analyzer" : "standard" }, "source" : { "type" : "text", "analyzer" : "analyzer_keyword", "fielddata" : true }, "streams" : { "type" : "keyword" }, "timestamp" : { "type" : "date", "format" : "yyyy-MM-dd HH:mm:ss.SSS" } } } } }'

What I want is to store the curl GET output as valid JSON which I can then use in the curl PUT:

curl get > some.json
curl put -d some.json

I am new to this and I tried several options with jq as well, but that also didn't work for me. Please guide me here. Regards, SAM
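Not from the thread, but the likely culprit: `-d sampl.json` sends the literal string `sampl.json` as the request body, which is why Elasticsearch cannot parse it. Prefixing the filename with `@` makes curl read the body from the file instead. A sketch (the real commands are commented out because they need the live server from the question):

```shell
# Replay the saved GET output (URLs from the question; needs the live server):
#   curl -s 'http://localhost:9200/oldindex/_mapping?pretty' -o mapping.json
#   curl -X PUT 'http://localhost:9200/new_good' \
#        -H 'Content-Type: application/json' \
#        -d @mapping.json

# Local illustration of the difference, no server needed:
printf '%s' '{"a":1}' > mapping.json
body_with_at=$(cat mapping.json)   # what curl sends for -d @mapping.json
body_without='mapping.json'        # what curl sends for -d mapping.json
printf '%s vs %s\n' "$body_with_at" "$body_without"
# {"a":1} vs mapping.json
```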
Samurai (95 rep)
Jun 1, 2022, 06:23 AM • Last activity: Jun 4, 2025, 12:03 PM
3 votes
3 answers
4652 views
Extract json array element based on a subelement value
We have the following example file (a very long file; this is a short example):

"request_status" : "FAILED"
{
  "href" : "http://localhost:8080/api/v1/clusters/sys41/requests/333",
  "Requests" : {
    "cluster_name" : "sys41",
    "id" : 333,
    "request_status" : "COMPLETED"
  }
},
{
  "href" : "http://localhost:8080/api/v1/clusters/sys41/requests/334",
  "Requests" : {
    "cluster_name" : "sys41",
    "id" : 334,
    "request_status" : "FAILED"
  }
},
{
  "href" : "http://localhost:8080/api/v1/clusters/sys41/requests/335",
  "Requests" : {
    "cluster_name" : "sys41",
    "id" : 335,
    "request_status" : "FAILED"
  }
},
{
  "href" : "http://localhost:8080/api/v1/clusters/sys41/requests/336",
  "Requests" : {
    "cluster_name" : "sys41",
    "id" : 336,
    "request_status" : "COMPLETED"
  }
}

How do I print the line after the line that matches "id" : $num? E.g. for num=335, how do I get the line after "id" : $num? Expected output:

"request_status" : "FAILED"
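Taking the request literally (print the physical line after the matching line), a hedged awk sketch over a small stand-in file; a JSON-aware tool would be more robust, since this depends on the file's exact layout:

```shell
cat > requests.txt <<'EOF'
      "id" : 334,
      "request_status" : "FAILED"
      "id" : 335,
      "request_status" : "FAILED"
      "id" : 336,
      "request_status" : "COMPLETED"
EOF

num=335
# Build the regex from $num, then print the line that follows the match
awk -v num="$num" '$0 ~ "\"id\" : " num ","{ getline; print }' requests.txt
```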
yael (13936 rep)
Mar 5, 2018, 05:09 PM • Last activity: May 4, 2025, 08:48 AM
6 votes
5 answers
10541 views
Remove trailing commas from invalid json (to make it valid)
Let's say I have a file as below:

{
  "fruit": "Apple",
}

I want to remove the comma at the end of the line, if and only if the next line contains "}". So the output will be:

{
  "fruit": "Apple"
}

However, if the file is as below, I do not want to make any change, since the ,s are not followed by a }:

{
  "fruit": "Apple",
  "size": "Large",
  "color": "Red"
}

Anything with sed would be fantastic.
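A hedged sketch with GNU sed (the `-z` option, which slurps the whole file into the pattern space, is GNU-specific): delete a comma only when the next line holds nothing but whitespace and `}`:

```shell
printf '{\n  "fruit": "Apple",\n}\n' |
sed -z 's/,\(\n[[:space:]]*}\)/\1/g'
# {
#   "fruit": "Apple"
# }
```

A comma followed by another key line does not match the pattern, so the second example file passes through unchanged.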
Somy (181 rep)
Nov 29, 2018, 07:45 PM • Last activity: Apr 28, 2025, 06:58 AM
1 vote
2 answers
1532 views
awk script to delete blocks in json
I have a newline delimited JSON file with entries like this:

{"id":"eprints.ulster.ac.uk/view/year/2015.html","title":"Items where Year is 2015 - Ulster Institutional Repository","url":"eprints.ulster.ac.uk/view/year/2015.html"}
{"id":"eprints.ulster.ac.uk/view/year/2016.html","title":"Items where Year is 2016 - Ulster Institutional Repository","url":"eprints.ulster.ac.uk/view/year/2016.html"}
{"id":"eprints.ulster.ac.uk/view/year/2017.html","title":"Items where Year is 2017 - Ulster Institutional Repository","url":"eprints.ulster.ac.uk/view/year/2017.html"}
{"id":"eprints.ulster.ac.uk/10386/","title":"Structural performance of rotationally restrained steel columns in fire - Ulster Institutional Repos","url":"eprints.ulster.ac.uk/10386/"}
{"id":"eprints.ulster.ac.uk/10387/","title":"Determining the Effective Length of Fixed End Steel Columns in Fire - Ulster Institutional Repositor","url":"eprints.ulster.ac.uk/10387/"}

I only want blocks where the .id does not begin with "eprints.ulster.ac.uk/view/". So if the script were run on the above snippet, the first 3 blocks would be deleted and the only blocks remaining would be:

{"id":"eprints.ulster.ac.uk/10386/","title":"Structural performance of rotationally restrained steel columns in fire - Ulster Institutional Repos","url":"eprints.ulster.ac.uk/10386/"}
{"id":"eprints.ulster.ac.uk/10387/","title":"Determining the Effective Length of Fixed End Steel Columns in Fire - Ulster Institutional Repositor","url":"eprints.ulster.ac.uk/10387/"}

Can anybody help write an awk script to do this?
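Since each entry is a single line, the "blocks" are just lines; a hedged awk sketch over a trimmed stand-in file (jq's `select` with `startswith` would be the more robust route):

```shell
cat > records.json <<'EOF'
{"id":"eprints.ulster.ac.uk/view/year/2015.html","title":"t1","url":"u1"}
{"id":"eprints.ulster.ac.uk/10386/","title":"t2","url":"u2"}
EOF

# Keep only lines whose id does not start with the /view/ prefix
awk '!/^{"id":"eprints\.ulster\.ac\.uk\/view\//' records.json
# {"id":"eprints.ulster.ac.uk/10386/","title":"t2","url":"u2"}
```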
KoreMike (25 rep)
Oct 3, 2015, 03:07 PM • Last activity: Apr 13, 2025, 11:24 AM
3 votes
2 answers
1779 views
Convert json numbers to strings in the shell
When parsing *json*, the command-line tool jshon converts numbers to scientific notation, and sometimes tries to round them. To avoid these problems, I want jshon to consider these numbers as strings. For that, I have found that I need to place quotes around all numbers in the json file. After some unsuccessful googling, I tried writing a sed command to quote the numbers, but I found it pretty unsafe, and have run into lots of issues already:

sed -r 's/(" ?[:,] ?)"?([0-9]+(\.[0-9]+)?)"?([,}]|$)/\1"\2"\4/g' $file

I would like to know if there is some stable **parser** that can give me the desired result. I'm not including an example json file in the question, because I need this code for some slightly risky operations, and I will be parsing json from random websites.
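One hedged candidate for a stable parser here is jq: its `walk` builtin (built in since jq 1.6) can rewrite every number to a string in one pass. (A caveat: jq versions before 1.7 may still normalise extreme number literals while parsing; jq 1.7 preserves them.)

```shell
printf '%s' '{"a": 1, "b": [2.5, {"c": 42}]}' |
jq -c 'walk(if type == "number" then tostring else . end)'
# {"a":"1","b":["2.5",{"c":"42"}]}
```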
admirabilis (4792 rep)
Jul 14, 2014, 12:26 AM • Last activity: Apr 13, 2025, 10:49 AM
1 vote
2 answers
119 views
Extracting two (or more) related values from an array of JSON objects
Consider a contrived example using a JSON object such as this, where I want to extract the related id, firstname, and lastname fields for each of many array objects into shell variables for further (non-JSON) processing.
{
  "customers": [
    {
      "id": 1234,
      "firstname": "John",
      "lastname": "Smith",
      "other": "fields",
      "are": "present",
      "here": "etc."
    },
    {
      "id": 2468,
      "firstname": "Janet",
      "lastname": "Green",
      "other": "values",
      "are": "probably",
      "here": "maybe"
    }
  ]
}
For simple data I can use this,
jq -r '.customers[] | "\(.id) -- \(.firstname) -- \(.lastname)"'
Output
1234 -- John -- Smith
2468 -- Janet -- Green
but of course this will fail with double-barrelled firstname values such as Anne Marie. Changing the separator to another character such as # feels more like a fudge than a solution but could be acceptable. For more complex situations I might pick out the list of id values and then trade speed for accuracy by going back to extract the corresponding firstname and lastname elements. Something like this:
jq -r '.customers[].id' 
Output
1234
2468
However, neither of these is both correct and efficient. While I'm not going to be running the real code at a high frequency, I'd like to understand if there is a more appropriate way of getting multiple data elements safely and efficiently out of a JSON object structure and into shell variables?
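A common hedged pattern for exactly this: have jq emit one tab-separated record per customer with `@tsv` (which also escapes any literal tabs in the data), then split on tabs in the shell, so names containing spaces survive intact:

```shell
cat > customers.json <<'EOF'
{"customers":[
  {"id": 1234, "firstname": "John", "lastname": "Smith"},
  {"id": 2468, "firstname": "Anne Marie", "lastname": "Green"}
]}
EOF

tab=$(printf '\t')
jq -r '.customers[] | [.id, .firstname, .lastname] | @tsv' customers.json |
while IFS="$tab" read -r id first last; do
  printf 'id=%s first=%s last=%s\n' "$id" "$first" "$last"
done
# id=1234 first=John last=Smith
# id=2468 first=Anne Marie last=Green
```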
Chris Davies (126603 rep)
Mar 10, 2025, 11:05 PM • Last activity: Mar 11, 2025, 01:39 PM
1 vote
3 answers
973 views
Update file with multiple values automatically
I have a hosted zone and record set that route to multiple addresses. I'd like to update the record set by adding or removing one IP address in the list. Unfortunately, the AWS CLI doesn't provide an option for deleting/adding the value of a resource record in Route 53:

{
  "Comment": "Update the A record set",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mydomain.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "XX.XX.XX.XX" }
        ]
      }
    }
  ]
}

I can add multiple IP addresses into the JSON manually, like this. ***But I want to add multiple IPs to the JSON file using bash, automatically.***

{
  "Comment": "Update the A record set",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mydomain.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "XX.XX.XX.XX" },
          { "Value": "XX.XX.XX.XX" }
        ]
      }
    }
  ]
}
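For the automation part, a hedged jq sketch (the file name and example IPs are mine): append one `{"Value": ...}` entry per address to the ResourceRecords array before handing the file to the AWS CLI:

```shell
cat > change.json <<'EOF'
{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"mydomain.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"10.0.0.1"}]}}]}
EOF

# Append each IP as a new {"Value": ...} object
for ip in 10.0.0.2 10.0.0.3; do
  jq --arg ip "$ip" \
     '.Changes[0].ResourceRecordSet.ResourceRecords += [{"Value": $ip}]' \
     change.json > change.tmp && mv change.tmp change.json
done

jq -c '.Changes[0].ResourceRecordSet.ResourceRecords' change.json
# [{"Value":"10.0.0.1"},{"Value":"10.0.0.2"},{"Value":"10.0.0.3"}]
```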
Nani (373 rep)
Feb 13, 2019, 07:27 PM • Last activity: Mar 10, 2025, 10:16 PM
1 vote
3 answers
226 views
How could I (painlessly) split or reverse "Last, First" within a record in Miller?
I have a tab-delimited file where one of the columns is in the format "LastName, FirstName". What I want to do is split that record out into two separate columns, last, and first, use cut or some other verb(s) on _that_, and output the result to JSON. I should add that I'm not married to JSON, and I know how to use other tools like [jq](https://github.com/stedolan/jq) , but it would be nice to get it in that format in one step. The syntax for the nest verb looks like it requires memorizing a lot of frankly non-memorable options, so I figured that there would be a simple DSL operation to do this job. Maybe that's not the case? Here's what I've tried. (Let's just forget about the extra space that's attached to Firstname right now, OK? I would use strip or ssub or something to get rid of that later.)
echo -e "last_first\nLastName, Firstname" \
  | mlr --t2j put '$o=splitnv($last_first,",")'

# result:
# { "last_first": "LastName, Firstname", "o": "(error)" }

# expected something like:
# { "last_first": "LastName, Firstname", "o": { 1: "LastName", 2: "Firstname" } }
#
# or:
# { "last_first": "LastName, Firstname", "o": [ "LastName", "Firstname" ] }
Why (error)? Is it not reasonable that assigning to $o as above would assign a new column o to the result of splitnv? Here's something else I tried that didn't work like I would've expected either:
echo -e "last_first\nLastName, Firstname" \
  | mlr -T nest --explode --values --across-fields --nested-fs , -f last_first

# result (no delimiter here, just one field, confirmed w/ 'cat -A')
# last_first
# LastName, Firstname

# expected:
# last_first_1last_first_2
# LastName, Firstname
**Edit**: The problem with the command above is I should've used --tsv, **not** -T, which is a synonym for --nidx --fs tab (numerically-indexed columns). Problem is, Miller doesn't produce an error message when it's obviously wrong to ask for named columns in that case, which might be a mis-feature; see [issue #233](https://github.com/johnkerl/miller/issues/233) . Any insight would be appreciated.
Kevin E (540 rep)
Mar 7, 2019, 10:52 AM • Last activity: Mar 10, 2025, 10:05 PM
2 votes
1 answer
82 views
Append to matching children of an arbitrarily deep array
I am attempting to edit an OpenAPI specification by changing all parameters to be nullable. A parameter definition looks like this:
{
    "name": "foo",
    "required": true,
    "type": "string"
}
They are contained in a parameters array that can be anywhere in the document. What I need is to append "x-nullable": true to any parameter containing a type property. Sample data:
{
    "parameters": [
        {"notype": "x"},
        {"type": "x"}
    ],
    "foo": {
        "parameters": [
            {"type": "y"}
        ]
    },
    "bar": {
        "baz": {
            "parameters": [
                {"type": "z"}
            ]
        }
    }
}
Desired output:
{
    "parameters": [
        {"notype": "x"},
        {
            "type": "x",
            "x-nullable": true
        }
    ],
    "foo": {
        "parameters": [
            {
                "type": "y",
                "x-nullable": true
            }
        ]
    },
    "bar": {
        "baz": {
            "parameters": [
                {
                    "type": "z",
                    "x-nullable": true
                }
            ]
        }
    }
}
The closest I could get was [this](https://play.jqlang.org/s/Xx-ep42wRLGfU8R) :
.. | (.parameters[] | select(.type)) += {"x-nullable":true}
It successfully changes one of the items in my test document, but the results are inconsistent and seem to be based on the structure I choose for sample data.
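The `..`-based update can behave surprisingly because the paths it generates overlap while the assignment is rewriting the document. A hedged alternative that behaves consistently: `walk` every object, and wherever a `parameters` array exists, tag its type-bearing members:

```shell
cat > spec.json <<'EOF'
{
  "parameters": [{"notype": "x"}, {"type": "x"}],
  "foo": {"parameters": [{"type": "y"}]},
  "bar": {"baz": {"parameters": [{"type": "z"}]}}
}
EOF

jq 'walk(
  if type == "object" and has("parameters")
  then .parameters |= map(if has("type") then . + {"x-nullable": true} else . end)
  else .
  end)' spec.json
```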
miken32 (588 rep)
Feb 28, 2025, 05:12 PM • Last activity: Feb 28, 2025, 07:59 PM
3 votes
2 answers
492 views
Updating JSON document with JSON object embedded in part of JSON string
I have something like the following
{
  "t": "set foo='{\"mode\":1}'"
}
And I'd like to transform it into something like
{
  "t": "set foo='{\"mode\":1}'",
  "mode": 1
}
... where the key mode and the value 1 are taken from the embedded JSON in the value of the t key. I am executing several commands to make this happen, but I would like to see if putting it into one jq call is possible.
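A single-call hedged sketch: pull the `{...}` substring out of `t` with a regex (`scan`), parse it with `fromjson`, and merge the result back into the top-level object:

```shell
cat > doc.json <<'EOF'
{
  "t": "set foo='{\"mode\":1}'"
}
EOF

# scan extracts the embedded {...}; fromjson parses it; . + merges it in
jq '. + (.t | scan("\\{.*\\}") | fromjson)' doc.json
```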
winmutt (131 rep)
Dec 7, 2018, 07:57 PM • Last activity: Feb 25, 2025, 11:28 AM
1 vote
2 answers
314 views
Updating and removing values in JSON array based on matching with placeholder values
I search for an object with a specific property and then update another property. Given this input:
[
  {
    "replacements": [
      {
        "newValue": "0",
        "yamlPath": "k8s-helm-templates.deployment.containers.abc.image.tag"
      },
      {
        "newValue": "0",
        "yamlPath": "k8s-helm-templates.deployment.containers.def.image.tag"
      },
      {
        "newValue": "0",
        "yamlPath": "k8s-helm-templates.deployment.containers.ghi.image.tag"
      }
    ],
    "yamlFilePath": "k8s/helm/Dev/us-east-1/values.yaml"
  }
]
And I have the placeholders available as:
[
  {"name":"abc-image-tag","value":"123"},
  {"name":"def-image-tag","value":"456"}
]
Here, abc-image-tag corresponds to k8s-helm-templates.deployment.containers.abc.image.tag, etc. This should produce a result where the given values are replaced and the remaining 0-values are filtered out, such as this:
[
  {
    "replacements": [
      {
        "newValue": "123",
        "yamlPath": "k8s-helm-templates.deployment.containers.abc.image.tag"
      },
      {
        "newValue": "456",
        "yamlPath": "k8s-helm-templates.deployment.containers.def.image.tag"
      }
    ],
    "yamlFilePath": "k8s/helm/Dev/us-east-1/values.yaml"
  }
]
I tried several tips and dug into the documentation but couldn't get it to work. Is this even possible with jq? Additional steps with bash would be ok.
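It does seem possible in jq alone; a hedged sketch (the mapping from container name to placeholder name, `<name>-image-tag`, is inferred from the examples): build a lookup table from the placeholders, rewrite the matching newValues, then drop the entries still set to "0":

```shell
cat > replacements.json <<'EOF'
[{"replacements":[
  {"newValue":"0","yamlPath":"k8s-helm-templates.deployment.containers.abc.image.tag"},
  {"newValue":"0","yamlPath":"k8s-helm-templates.deployment.containers.def.image.tag"},
  {"newValue":"0","yamlPath":"k8s-helm-templates.deployment.containers.ghi.image.tag"}],
  "yamlFilePath":"k8s/helm/Dev/us-east-1/values.yaml"}]
EOF

jq --argjson ph '[{"name":"abc-image-tag","value":"123"},{"name":"def-image-tag","value":"456"}]' '
  ($ph | map({key: .name, value: .value}) | from_entries) as $m
  | map(.replacements |= (
      map(
        (.yamlPath | capture("containers\\.(?<c>[^.]+)\\.image\\.tag").c + "-image-tag") as $k
        | if $m[$k] != null then .newValue = $m[$k] else . end
      )
      | map(select(.newValue != "0"))
    ))' replacements.json
```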
Jazzschmidt (353 rep)
Dec 14, 2023, 02:35 PM • Last activity: Feb 25, 2025, 10:34 AM
3 votes
1 answer
1247 views
Extract data from JSON document embedded in another JSON document
I have a JSON document that embeds another JSON document. I need to extract data from the embedded document, but I'm not well versed with JSON or with jq. The following is my input (guard.json):
{
  "_index": "test-2021.06.02",
  "_type": "servrd",
  "_id": "ZWUxMDU5MjItOGY2MC00MGI5LWJhZTEtODRhYWQ1YTZhOGIy",
  "_version": 1,
  "_score": null,
  "_source": {
    "stream": "stdout",
    "time": "2021-10-02T03:13:52.496705721Z",
    "docker": {
      "container_id": "392923402349320329432h3432k4kj32kfks9s9sdfksdfjkdsjfsh3939322342"
    },
    "kubernetes": {
      "container_name": "test",
      "namespace_name": "dev",
      "pod_name": "test-dev-v004-9s885",
      "container_image": "localhost:80/pg/test:1.19.0",
      "container_image_id": "docker-pullable://localhost:80/pg/test@sha256:2sfdsfsfsfsfdsr3wrewrewc235e728ad1b29baf5af3dfad30c7706e5eda38b6109",
      "pod_id": "ssfsfds-dsfdsfs-fs-sfsfs-sfdsfsdfsewrw",
      "host": "test-jup-node008",
      "labels": {
        "app": "test",
        "cluster": "test-dev",
        "load-balancer-test-dev": "true",
        "stack": "dev",
        "app_kubernetes_io/managed-by": "spinnaker",
        "app_kubernetes_io/name": "test",
        "moniker_spinnaker_io/sequence": "4"
      },
      "min_url": "https://100.400.0.22:443/api ",
      "namespace_id": "jajdjsdf-dfse-dsf-koksls-sksjf030292",
      "namespace_labels": {
        "name": "dev"
      }
    },
    "elapsedTime": 39013,
    "message": "TransactionLog",
    "membersId": "TEST_0233203203_030202020303",
    "payload": "{\"serviceId\":\"00343\",\"AccessKey\":\"testdfsolpGS\",\"trackID\":\"KOLPSLSLSLL99029283\",\"membersId\":\"TEST_0233203203_030202020303\",\"shopperInfo\":{\"emailAddress\":\"test.ooo4@yahoo.com\",\"ipAddress\":\"localhost\"},\"parkid\":{\"parkssID\":\"carrier-park\"},\"cartinfo\":{\"checkType\":\"preorder\",\"detailsmetis\":\"card\",\"currency\":\"US\",\"grosscount\":\"10\",\"reedeem\":\".00\",\"Discount\":\".00\",\"tokenvalue\":{\"token\":\"11102020392023920920393993\",\"Bin\":\"00000\",\"digit\":\"0000\",\"expirationDate\":\"202509\",\"price\":\"10\"}},\"cartdetails\":[{\"dones\":[{\"donesnames\":\"test\",\"price\":\"003\",\"quan\":\"1\"}]}]}",
    "requestDate": "2021-10-02T03:13:12.541207804Z",
    "requestId": "12321321wes-sfsfdsf-snnm79887-029299",
    "finalToClient": "{\"finalType\":\"ok\",\"evaluationMessage\":\"Accept\",\"subMessage\":\"testcallled\",\"score\":0}",
    "serviceId": 00343,
    "timestamp": "2021-10-02T03:13:51.951+00:00",
    "@timestamp": "2021-10-02T03:13:52.621643399+00:00"
  },
  "fields": {
    "@timestamp": [
      "2021-10-02T03:13:52.621Z"
    ],
    "requestDate": [
      "2021-10-02T03:13:12.541Z"
    ],
    "timestamp": [
      "2021-10-02T03:13:51.951Z"
    ]
  },
  "highlight": {
    "kubernetes.labels.app": [
      "@kibana-highlighted-field@test@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1654139632621
  ]
}
I need output in CSV format, similar to this:
serviceId, trackID, currency, grosscount
00343,KOLPSLSLSLL99029283,US,10
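A hedged sketch of the extraction: the embedded document is the string at `._source.payload`, so jq's `fromjson` can parse it and the wanted fields can be joined into CSV. (A caveat from the sample: `"serviceId": 00343` near the bottom has leading zeros and is not valid JSON, so jq will reject the full document until that value is quoted; the file below is a trimmed, valid stand-in for guard.json.)

```shell
cat > guard.json <<'EOF'
{"_source":{"payload":"{\"serviceId\":\"00343\",\"trackID\":\"KOLPSLSLSLL99029283\",\"cartinfo\":{\"currency\":\"US\",\"grosscount\":\"10\"}}"}}
EOF

jq -r '._source.payload | fromjson
       | [.serviceId, .trackID, .cartinfo.currency, .cartinfo.grosscount]
       | join(",")' guard.json
# 00343,KOLPSLSLSLL99029283,US,10
```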
SRash (111 rep)
Jun 2, 2022, 07:39 AM • Last activity: Feb 25, 2025, 07:26 AM
8 votes
3 answers
797 views
How can I make jq assign values to environment variables in one command?
Say I provide jq with the following JSON body as input:

{"person1": {"name": "foo"}, "person2": {"name": "bar"}}

Is it possible to have jq assign person1.name to environment variable PERSON_1_NAME and person2.name to environment variable PERSON_2_NAME, so that I don't have to run jq multiple times with the same input? E.g.:

INPUT='{"person1": {"name": "foo"}, "person2": {"name": "bar"}}'
export PERSON_1_NAME=$(echo $INPUT | jq -r .person1)
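One hedged way to do it in a single jq run: have jq print shell assignments (`@sh` quotes each interpolated value safely) and `eval` the result:

```shell
INPUT='{"person1": {"name": "foo"}, "person2": {"name": "bar"}}'

# jq runs once and emits: PERSON_1_NAME='foo' PERSON_2_NAME='bar'
eval "$(printf '%s' "$INPUT" | jq -r '
  @sh "PERSON_1_NAME=\(.person1.name) PERSON_2_NAME=\(.person2.name)"')"
export PERSON_1_NAME PERSON_2_NAME

printf '%s %s\n' "$PERSON_1_NAME" "$PERSON_2_NAME"
# foo bar
```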
Shuzheng (4931 rep)
Feb 18, 2025, 07:39 PM • Last activity: Feb 20, 2025, 07:35 PM
2 votes
2 answers
1155 views
Conditionally add values to JSON arrays
I have some JSON objects that I need to add a value to if it doesn't already exist. Each object will be the following except the contact_group inside each array (1-5) will vary:
{
  "contact_groups": {
    "1": [
      "/contact_group/78"
    ],
    "2": [
      "/contact_group/79"
    ],
    "3": [],
    "4": [],
    "5": []
  }
}
I want to add /contact_group/109 to each array if it doesn't already exist, so the above would become:
{
  "contact_groups": {
    "1": [
      "/contact_group/78",
      "/contact_group/109"
    ],
    "2": [
      "/contact_group/79",
      "/contact_group/109"
    ],
    "3": [
      "/contact_group/109"
    ],
    "4": [
      "/contact_group/109"
    ],
    "5": [
      "/contact_group/109"
    ]
  }
}
I'm pretty sure jq can do this, but I'm inexperienced with it, so I don't know where to begin. Does anyone know how/if this can be done?
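A hedged starting point: `map_values` visits each array inside contact_groups, and `index` tests membership before appending (jq treats the index 0 as truthy, so a match in the first position still counts):

```shell
cat > groups.json <<'EOF'
{"contact_groups":{"1":["/contact_group/78"],"2":["/contact_group/109"],"3":[]}}
EOF

jq -c --arg g "/contact_group/109" '
  .contact_groups |= map_values(if index($g) then . else . + [$g] end)
' groups.json
# {"contact_groups":{"1":["/contact_group/78","/contact_group/109"],"2":["/contact_group/109"],"3":["/contact_group/109"]}}
```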
jesse_b (41447 rep)
Jul 16, 2020, 05:06 PM • Last activity: Feb 20, 2025, 09:38 AM
1 vote
3 answers
751 views
Can I use sed to insert a pattern into a file?
I want to generate a file that starts in the format:

# topics-to-move.json
{
  "topics": [],
  "version": 1
}

with no { "topic": "" } entries in "topics":[]. I can grab the topic names from another script, which would give me a clean list of:

Topic..A
Topic..B
Topic..C

I'd like to be able to insert each of the above into the file topics-to-move.json, in the format of {"topic":"Topic..A"},{"topic":"Topic..B"},{"topic":"Topic..C"} -- can this be achieved using sed or something similar? For clarity, the final file should look like:

# topics-to-move.json
{
  "topics": [
    {"topic":"Topic..A"},
    {"topic":"Topic..B"},
    {"topic":"Topic..C"}
  ],
  "version": 1
}
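Rather than sed surgery on a template, a hedged alternative builds the whole file from the name list in one go (`jq -R` reads raw input lines; `-n` plus `inputs` collects them):

```shell
printf '%s\n' Topic..A Topic..B Topic..C |
jq -Rn '{topics: [inputs | {topic: .}], version: 1}' > topics-to-move.json

jq -c . topics-to-move.json
# {"topics":[{"topic":"Topic..A"},{"topic":"Topic..B"},{"topic":"Topic..C"}],"version":1}
```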
MrDuk (1657 rep)
Nov 3, 2017, 07:37 PM • Last activity: Feb 14, 2025, 12:36 PM
1 vote
1 answer
1948 views
Variable in curl adds backslashes to string
I am trying to use curl, based on some variables, to create customers in Stripe, but when I assign the token to a variable, Stripe gives me an error saying that it does not exist. However, if I put the text in directly, it works. How can I use the $TOKEN variable? Is there something changing the value that I don't realize?

Michael$ curl https://api.stripe.com/v1/customers -u $access_token: -d source=tok_1CjvRiDZ5DqZ0yaUVWXXXXXX
{ "error": { "code": "token_already_used", "doc_url": "https://stripe.com/docs/error-codes/token-already-used", "message": "You cannot use a Stripe token more than once: tok_1CjvRiDZ5DqZ0yaUVWXXXXXX.", "type": "invalid_request_error" } }

Michael$ curl https://api.stripe.com/v1/customers -u $access_token: -d source=$TOKEN
{ "error": { "code": "resource_missing", "doc_url": "https://stripe.com/docs/error-codes/resource-missing", "message": "No such token: \"tok_1CjvRiDZ5DqZ0yaUVWXXXXXX\"", "param": "source", "type": "invalid_request_error" } }

$TOKEN is assigned like this:

OUTPUT="$(curl https://api.stripe.com/v1/tokens -u $access_token: -d customer=$external_customer_id)"
TOKEN="$(echo $OUTPUT | jq .id)"
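For the record, the usual cause of those escaped quotes: `jq .id` prints a JSON-encoded string, double quotes included, so $TOKEN contains literal quote characters and Stripe echoes them back escaped. `jq -r` prints the raw value instead. A minimal sketch:

```shell
OUTPUT='{"id": "tok_1CjvRiDZ5DqZ0yaUVWXXXXXX"}'

TOKEN="$(printf '%s' "$OUTPUT" | jq .id)"    # "tok_..." with quotes included
RAW="$(printf '%s' "$OUTPUT" | jq -r .id)"   # tok_... as a raw string

printf '%s\n%s\n' "$TOKEN" "$RAW"
```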
Michael St Clair (113 rep)
Jul 3, 2018, 09:11 PM • Last activity: Feb 14, 2025, 12:17 PM
5 votes
2 answers
366 views
Merging multiple JSON data blocks into a single entity
I'm using an API (from [SyncroMSP](https://api-docs.syncromsp.com/)) that returns paginated JSON data. I can obtain the number of pages, and I can obtain the data with a tool such as curl. Each chunk is valid JSON but it only contains a subset of the total data that I need. Using jq or otherwise how can I merge the tickets[] elements of these paginated data chunks back into a single JSON document? Here are three example chunks. The tickets[] arrays are heavily edited for this question and in reality contain up to 25 entries, and each ticket entry contains many more elements including at least a couple of arrays. JSON example block 1 (part_1.json) { "tickets": [ { "number": 4445, "subject": "Your mailbox is almost full" }, { "number": 4444, "subject": "Cannot VPN" } ], "meta": { "total_pages": 3, "page": 1 } } JSON example block 2 (part_2.json) { "tickets": [ { "number": 4395, "subject": "Trados Studio issue" }, { "number": 4394, "subject": "Daily Backup Report(No Errors)" } ], "meta": { "total_pages": 3, "page": 2 } } JSON example block 3 (part_3.json) { "tickets": [ { "number": 4341, "subject": "Daily Backup Report(No Errors)" }, { "number": 4340, "subject": "Windows Updates on VMs" } ], "meta": { "total_pages": 3, "page": 3 } } In this case the expected result would be something like this: { "tickets": [ { "number": 4445, "subject": "Your mailbox is almost full" }, { "number": 4444, "subject": "Cannot VPN" }, { "number": 4395, "subject": "Trados Studio issue" }, { "number": 4394, "subject": "Daily Backup Report(No Errors)" }, { "number": 4341, "subject": "Daily Backup Report(No Errors)" }, { "number": 4340, "subject": "Windows Updates on VMs" } ] } The output could include the meta hash too, as I'd just ignore it, and it wouldn't matter which meta.page value was carried forward. You can assume that tickets[].number is unique and that you do not need to preserve any ordering at that tickets[] level. 
There's enough complexity in the real data that I don't want to have to declare the full JSON structure in any resulting code. This is my current attempt, but I'm not particularly strong with jq. Is there a better way, for example not calling jq twice, or generalising the code so that I don't need to specify the name of the top level array (tickets)?

cat part_{1,2,3}.json | jq '.tickets[]' | jq -n '{ tickets:[ inputs ] }'
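For comparison, a hedged single-call variant: under `-n`, `inputs` yields each file's document in turn, so only the top-level array name needs to be known and jq runs once:

```shell
cat > part_1.json <<'EOF'
{"tickets":[{"number":4445,"subject":"a"}],"meta":{"total_pages":2,"page":1}}
EOF
cat > part_2.json <<'EOF'
{"tickets":[{"number":4340,"subject":"b"}],"meta":{"total_pages":2,"page":2}}
EOF

# inputs streams each file's document; the meta hashes are simply ignored
jq -n '{tickets: [inputs | .tickets[]]}' part_1.json part_2.json
```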
Chris Davies (126603 rep)
Dec 13, 2024, 03:18 PM • Last activity: Feb 14, 2025, 05:44 AM
Showing page 1 of 20 total questions