Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
3
votes
2
answers
1511
views
How do I run a database heavy query over jdbc without timing out (Oracle)?
For one of our applications (Pentaho Data Integration), we need to run queries through the Oracle JDBC driver. Some of these queries are computationally expensive and can take an hour or more to return any rows to the client. When I try to run one of the long-running queries with the default JDBC parameters, I inevitably get "IO Error: Socket read timed out" after ~30 minutes, even though the database is still processing the query (verified in V$SESSION and V$SQLAREA). I am using the newest ojdbc8.jar direct from Oracle and running Oracle Enterprise 12.1.0.2.0.
EDIT:
The output of these queries is being stored in a separate database. The nature of our environment is such that we have an arbitrary number of queries that could be created/run so using stored procedures would be cumbersome.
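For reference, the timeout knobs involved live on the thin driver itself. Below is a minimal, hedged sketch using the standard driver properties oracle.jdbc.ReadTimeout (0 disables the driver-side socket read timeout) and oracle.net.keepAlive, with placeholder host, service, and credentials; in PDI the same key/value pairs would typically be supplied on the database connection's Options tab rather than in code (worth verifying for your PDI version):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class LongRunningOracleQuery {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "etl_user");        // placeholder credentials
        props.setProperty("password", "secret");
        // 0 = no driver-side socket read timeout; a finite value (in ms) is what
        // produces "IO Error: Socket read timed out" when the server stays silent.
        props.setProperty("oracle.jdbc.ReadTimeout", "0");
        // TCP keep-alive makes it less likely that a device between client and
        // server drops the idle connection while the query is still computing.
        props.setProperty("oracle.net.keepAlive", "true");

        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/SERVICE", props);   // placeholder URL
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM big_table")) {  // placeholder long-running query
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}
```

Note that a ~30-minute cutoff while the session is still alive often points at an idle-connection timeout on a firewall or load balancer between client and server, which is what the keep-alive setting is meant to mitigate.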
Andrew Hangsleben
(41 rep)
Feb 7, 2018, 09:12 PM
• Last activity: Feb 18, 2025, 06:03 PM
1
votes
1
answers
4148
views
How can I control how Pentaho DI formats integers coming from a text file?
I have a simple pipe-delimited text file with integers recorded as a sequence of numbers:
012345|00678 |various|text|fields
There are some leading zeros, and some trailing white space, but no decimals, commas, internal spaces, or other formatting. Should be simple, right?
Pentaho's field analysis concludes that the fields are of type "Number", format "#.#", and precision "0". That format string makes me wonder if Pentaho thinks there might be decimals in the columns, but when I run Preview Rows, I get nice clean integers, without decimals, scientific notation, or leading zeros, exactly as I want it.
When I try to consume the data with a PostgreSQL Bulk Loader step, I get error messages like these:
Import Table.0 - ERROR {0} ERROR: invalid input syntax for integer: "5.017645082E9"
Import Table.0 - ERROR {0} ERROR: invalid input syntax for integer: "1.0"
It seems that Pentaho has taken it on itself to format the integers with scientific notation and/or fixed decimals, and naturally Postgres balks at importing this.
How can I stop Pentaho from formatting these integers?
I'm using Pentaho Data Integration v6.1, and Postgres 9.4.9. The file uses Windows line breaks and ANSI text. Changing the field format to "#" or "0" had no effect. Using a Table Output instead of a PostgreSQL Bulk Load works, but it is much slower.
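As background on where strings like 5.017645082E9 can come from (an illustration of the Java formatting behaviour, not PDI's actual code path): when a value is carried as a floating-point Number, Java's default rendering of large values uses exponent form, whereas an Integer type or an explicit integer mask keeps plain digits.

```java
import java.text.DecimalFormat;

public class NumberFormatDemo {
    public static void main(String[] args) {
        double asDouble = 5017645082d;   // value carried as a floating-point "Number"
        long asInteger  = 5017645082L;   // value carried as an "Integer"

        // Default Double rendering uses scientific notation for large values,
        // which PostgreSQL rejects for an integer column.
        System.out.println(Double.toString(asDouble));                // 5.017645082E9
        // An explicit integer-style mask (or the Integer type) keeps plain digits.
        System.out.println(new DecimalFormat("#").format(asDouble));  // 5017645082
        System.out.println(Long.toString(asInteger));                 // 5017645082
    }
}
```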
Jon of All Trades
(5987 rep)
Mar 31, 2017, 10:50 PM
• Last activity: Feb 8, 2025, 02:01 PM
-2
votes
2
answers
9270
views
How to set up a connection in Pentaho Kettle for Oracle using a SERVICE NAME instead of SID?
![Connection error dialog](https://i.sstatic.net/WZcDP.png)

This error shows up when I try to connect to the database string host:port/servicename.
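For reference, the two Oracle thin-driver URL shapes differ only in punctuation; here is a minimal sketch with a hypothetical host and service name (in Kettle, a common workaround is to supply the service-name form of the URL yourself, for example via a Generic database connection with a custom URL, rather than relying on the host/port/SID fields):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class OracleServiceNameConnect {
    public static void main(String[] args) throws Exception {
        // SID form (what a plain host/port/database entry usually builds):
        //   jdbc:oracle:thin:@dbhost:1521:MYSID
        // Service-name form (note the leading // and the / before the service):
        String url = "jdbc:oracle:thin:@//dbhost:1521/my_service";  // hypothetical values
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected via SERVICE_NAME");
        }
    }
}
```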

dungeon_master
(1 rep)
Feb 11, 2014, 09:53 AM
• Last activity: Dec 28, 2023, 05:44 AM
2
votes
1
answers
670
views
Is it secure to store connection password on Pentaho Spoon?
I'm going to use Spoon to transfer data from Postgres to MSSQL. I see that when we create a connection its data is stored and we don't need to type the password anymore.
But I can't find where it stores this data. I looked in the .ktr file and didn't find the username in it.
I searched for how it's stored but found no info. I can't approve its use without knowing that the password is securely stored and a hacker won't retrieve it.
Update: I found where the login is saved in the .ktr XML file, and the password is saved encrypted. Is it secure?
I didn't see a private key being used to encrypt it. If it's using some symmetric-key encryption, isn't anybody able to decrypt it? It may be even worse to have encryption that's easy to decrypt than no encryption at all.
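For context on what that stored value is: Kettle ships an Encr helper class that both encodes and decodes connection passwords. A minimal round-trip sketch, assuming kettle-core is on the classpath and that this API is available in your PDI version (some versions need KettleEnvironment.init() first):

```java
import org.pentaho.di.core.encryption.Encr;

public class KettlePasswordRoundTrip {
    public static void main(String[] args) throws Exception {
        // Some PDI versions require KettleEnvironment.init() before using Encr.
        String stored = Encr.encryptPasswordIfNotUsingVariables("secret");
        System.out.println(stored);    // an "Encrypted ..." string like the one in the .ktr
        // Anyone with the same jar can reverse it, so treat it as obfuscation,
        // not encryption; file-system permissions still carry the real protection.
        String recovered = Encr.decryptPasswordOptionallyEncrypted(stored);
        System.out.println(recovered); // secret
    }
}
```

In other words, the value in the .ktr is reversible obfuscation rather than true encryption, so restricting access to the file (or using a repository with access control) is what actually protects the credential.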
Hikari
(1603 rep)
Apr 28, 2022, 01:58 PM
• Last activity: Mar 31, 2023, 09:58 AM
-1
votes
1
answers
551
views
ERROR [XulParser] handler not found: WİNDOW
PDI (spoon.bat) crashes on startup. Here's /logs/spoon.log:
09:59:04,793 INFO [KarafBoot] Checking to see if org.pentaho.clean.karaf.cache is enabled
09:59:08,932 INFO [KarafInstance]
*******************************************************************************
*** Karaf Instance Number: 1 at f:\user\pdi-ce-8.0.0.0-28\data-integration\ ***
*** .\system\karaf\caches\spoon\data-1 ***
*** FastBin Provider Port:52901 ***
*** Karaf Port:8802 ***
*** OSGI Service Port:9051 ***
*******************************************************************************
?ub 20, 2018 9:59:09 AM org.apache.karaf.main.Main$KarafLockCallback lockAquired
INFO: Lock acquired. Setting startlevel to 100
2018-02-20 09:59:14.928:INFO:oejs.Server:jetty-8.1.15.v20140411
2018-02-20 09:59:15.055:INFO:oejs.AbstractConnector:Started NIOSocketConnectorWrapper@0.0.0.0:9051
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/blueprint/core
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/configuration/beans
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/configuration/parameterized-types
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/configuration/security
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://schemas.xmlsoap.org/wsdl/
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://www.w3.org/2005/08/addressing
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://schemas.xmlsoap.org/ws/2004/08/addressing
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-management (171) [org.apache.cxf.management.InstrumentationManager]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-wsdl (174) [org.apache.cxf.wsdl.WSDLManager]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-bindings-xml (176) [org.apache.cxf.binding.xml.XMLBindingFactory, org.apache.cxf.binding.xml.wsdl11.XMLWSDLExtensionLoader]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-bindings-soap (177) [org.apache.cxf.binding.soap.SoapBindingFactory, org.apache.cxf.binding.soap.SoapTransportFactory]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/blueprint/bindings/soap
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-ws-policy (195) [org.apache.cxf.ws.policy.PolicyEngine, org.apache.cxf.policy.PolicyDataEngine, org.apache.cxf.ws.policy.AssertionBuilderRegistry, org.apache.cxf.ws.policy.PolicyInterceptorProviderRegistry, org.apache.cxf.ws.policy.PolicyBuilder, org.apache.cxf.ws.policy.PolicyAnnotationListener, org.apache.cxf.ws.policy.attachment.ServiceModelPolicyProvider, org.apache.cxf.ws.policy.attachment.external.DomainExpressionBuilderRegistry, org.apache.cxf.ws.policy.attachment.external.EndpointReferenceDomainExpressionBuilder, org.apache.cxf.ws.policy.attachment.external.URIDomainExpressionBuilder, org.apache.cxf.ws.policy.attachment.wsdl11.Wsdl11AttachmentPolicyProvider, org.apache.cxf.ws.policy.mtom.MTOMAssertionBuilder, org.apache.cxf.ws.policy.mtom.MTOMPolicyInterceptorProvider]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-transports-http (178) [org.apache.cxf.transport.http.HTTPTransportFactory, org.apache.cxf.transport.http.HTTPWSDLExtensionLoader, org.apache.cxf.transport.http.policy.HTTPClientAssertionBuilder, org.apache.cxf.transport.http.policy.HTTPServerAssertionBuilder, org.apache.cxf.transport.http.policy.NoOpPolicyInterceptorProvider]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/transports/http/configuration
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/blueprint/simple
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-frontend-jaxws (180) [org.apache.cxf.jaxws.context.WebServiceContextResourceResolver]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/blueprint/jaxws
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/blueprint/jaxrs
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/blueprint/jaxrs-client
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/binding/coloc
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-transports-local (191) [org.apache.cxf.transport.local.LocalTransportFactory]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-bindings-object (192) [org.apache.cxf.binding.object.ObjectBindingFactory]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/blueprint/binding/object
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/policy
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://www.w3.org/ns/ws-policy
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://www.w3.org/2006/07/ws-policy
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://schemas.xmlsoap.org/ws/2004/09/policy
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://www.w3.org/2000/09/xmldsig#
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-ws-addr (212) [org.apache.cxf.ws.addressing.policy.AddressingAssertionBuilder, org.apache.cxf.ws.addressing.policy.UsingAddressingAssertionBuilder, org.apache.cxf.ws.addressing.policy.AddressingPolicyInterceptorProvider, org.apache.cxf.ws.addressing.impl.AddressingWSDLExtensionLoader, org.apache.cxf.ws.addressing.WSAddressingFeature$WSAddressingFeatureApplier, org.apache.cxf.ws.addressing.MAPAggregator$MAPAggregatorLoader]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/ws/addressing
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-ws-security (214) [org.apache.cxf.ws.security.policy.WSSecurityPolicyLoader, org.apache.cxf.ws.security.cache.CacheCleanupListener]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-ws-rm (216) [org.apache.cxf.ws.rm.RMManager, org.apache.cxf.ws.rm.policy.RMPolicyInterceptorProvider, org.apache.cxf.ws.rm.policy.RM10AssertionBuilder, org.apache.cxf.ws.rm.policy.RM12AssertionBuilder, org.apache.cxf.ws.rm.policy.WSRMP12PolicyLoader, org.apache.cxf.ws.rm.policy.MC11PolicyLoader, org.apache.cxf.ws.rm.policy.RSPPolicyLoader]
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://cxf.apache.org/ws/rm/manager
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.blueprint.NamespaceHandlerRegisterer register
INFO: Registered blueprint namespace handler for http://schemas.xmlsoap.org/ws/2005/02/rm/policy
?ub 20, 2018 9:59:16 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-javascript (217) [org.apache.cxf.javascript.JavascriptServerListener]
?ub 20, 2018 9:59:17 AM org.pentaho.caching.impl.PentahoCacheManagerFactory$RegistrationHandler$1 onSuccess
INFO: New Caching Service registered
09:59:17,972 ERROR [KarafCapabilityProvider] No feature found matching id: common-ui-client-config
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/F:/user/pdi-ce-8.0.0.0-28/data-integration/launcher/../lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/f:/user/pdi-ce-8.0.0.0-28/data-integration/plugins/pentaho-big-data-plugin/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
?ub 20, 2018 9:59:20 AM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /lineage
?ub 20, 2018 9:59:20 AM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /i18n
?ub 20, 2018 9:59:21 AM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /marketplace
?ub 20, 2018 9:59:22 AM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /repositories
?ub 20, 2018 9:59:24 AM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /browser
09:59:30,154 ERROR [XulParser] handler not found: WİNDOW
2018/02/20 09:59:30 - General - ERROR (version 8.0.0.0-28, build 8.0.0.0-28 from 2017-11-05 07.27.50 by buildguy) : Error initializing transformation
2018/02/20 09:59:30 - General - ERROR (version 8.0.0.0-28, build 8.0.0.0-28 from 2017-11-05 07.27.50 by buildguy) : org.pentaho.ui.xul.XulException: java.lang.NullPointerException
2018/02/20 09:59:30 - General - at org.pentaho.ui.xul.impl.AbstractXulLoader.loadXul(AbstractXulLoader.java:134)
2018/02/20 09:59:30 - General - at org.pentaho.ui.xul.swt.SwtXulLoader.loadXul(SwtXulLoader.java:128)
2018/02/20 09:59:30 - General - at org.pentaho.ui.xul.swt.SwtXulLoader.loadXul(SwtXulLoader.java:122)
2018/02/20 09:59:30 - General - at org.pentaho.ui.xul.impl.AbstractXulLoader.loadXul(AbstractXulLoader.java:238)
2018/02/20 09:59:30 - General - at org.pentaho.di.ui.spoon.Spoon.init(Spoon.java:850)
2018/02/20 09:59:30 - General - at org.pentaho.di.ui.spoon.Spoon.createContents(Spoon.java:9209)
2018/02/20 09:59:30 - General - at org.eclipse.jface.window.Window.create(Window.java:426)
2018/02/20 09:59:30 - General - at org.eclipse.jface.window.Window.open(Window.java:785)
2018/02/20 09:59:30 - General - at org.pentaho.di.ui.spoon.Spoon.start(Spoon.java:9240)
2018/02/20 09:59:30 - General - at org.pentaho.di.ui.spoon.Spoon.main(Spoon.java:692)
2018/02/20 09:59:30 - General - at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018/02/20 09:59:30 - General - at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018/02/20 09:59:30 - General - at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018/02/20 09:59:30 - General - at java.lang.reflect.Method.invoke(Method.java:498)
2018/02/20 09:59:30 - General - at org.pentaho.commons.launcher.Launcher.main(Launcher.java:92)
2018/02/20 09:59:30 - General - Caused by: java.lang.NullPointerException
2018/02/20 09:59:30 - General - at org.pentaho.ui.xul.impl.XulParser.parseDocument(XulParser.java:92)
2018/02/20 09:59:30 - General - at org.pentaho.ui.xul.impl.AbstractXulLoader.loadXul(AbstractXulLoader.java:126)
2018/02/20 09:59:30 - General - ... 14 more
2018/02/20 09:59:30 - General - ERROR (version 8.0.0.0-28, build 8.0.0.0-28 from 2017-11-05 07.27.50 by buildguy) : Error starting Spoon shell
2018/02/20 09:59:30 - General - ERROR (version 8.0.0.0-28, build 8.0.0.0-28 from 2017-11-05 07.27.50 by buildguy) : java.lang.NullPointerException
2018/02/20 09:59:30 - General - at org.pentaho.di.ui.spoon.Spoon.init(Spoon.java:903)
2018/02/20 09:59:30 - General - at org.pentaho.di.ui.spoon.Spoon.createContents(Spoon.java:9209)
2018/02/20 09:59:30 - General - at org.eclipse.jface.window.Window.create(Window.java:426)
2018/02/20 09:59:30 - General - at org.eclipse.jface.window.Window.open(Window.java:785)
2018/02/20 09:59:30 - General - at org.pentaho.di.ui.spoon.Spoon.start(Spoon.java:9240)
2018/02/20 09:59:30 - General - at org.pentaho.di.ui.spoon.Spoon.main(Spoon.java:692)
2018/02/20 09:59:30 - General - at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018/02/20 09:59:30 - General - at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018/02/20 09:59:30 - General - at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018/02/20 09:59:30 - General - at java.lang.reflect.Method.invoke(Method.java:498)
2018/02/20 09:59:30 - General - at org.pentaho.commons.launcher.Launcher.main(Launcher.java:92)
stopping
How can I resolve this?
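One detail worth noting about the garbled handler name: under a Turkish JVM locale, upper-casing "window" yields "WİNDOW" (dotted capital İ), which would make a case-converted XUL handler lookup miss. A minimal, standalone illustration of that locale behaviour (plain Java, not Pentaho code):

```java
import java.util.Locale;

public class TurkishUppercaseDemo {
    public static void main(String[] args) {
        // Under a Turkish locale, 'i' upper-cases to dotted capital 'İ',
        // so "window" no longer matches a handler registered as "WINDOW".
        System.out.println("window".toUpperCase(new Locale("tr", "TR"))); // WİNDOW
        System.out.println("window".toUpperCase(Locale.ROOT));            // WINDOW
    }
}
```

The commonly reported workaround for this class of problem is forcing an English locale for Spoon, for example by adding -Duser.language=en -Duser.country=US to the JVM options in spoon.bat.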
Nae
(99 rep)
Feb 20, 2018, 07:07 AM
• Last activity: Oct 21, 2020, 08:23 AM
1
votes
0
answers
867
views
Insert/Update table on Pentaho when the table has a TYPE defined by user
I'm trying to make a remote connection from computer A to computer B, where they have the same database but computer B doesn't have data in the table PERSONA_Y_ESTADOS. The goal is to get 3 rows from the table on computer A (which is populated) into the empty table on computer B. However, it's giving me these errors.
2020/08/02 18:37:10 - Spoon - Running transformation using the Kettle execution engine
2020/08/02 18:37:10 - Spoon - Transformation opened.
2020/08/02 18:37:10 - Spoon - Launching transformation [pupu]...
2020/08/02 18:37:10 - Spoon - Started the transformation execution.
2020/08/02 18:37:10 - Insert / update.0 - ERROR (version 9.0.0.0-423, build 9.0.0.0-423 from 2020-01-31 04.53.04 by buildguy) : Error in step, asking everyone to stop because of:
2020/08/02 18:37:10 - Insert / update.0 - ERROR (version 9.0.0.0-423, build 9.0.0.0-423 from 2020-01-31 04.53.04 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseException:
2020/08/02 18:37:10 - Insert / update.0 - Error looking up row in database
2020/08/02 18:37:10 - Insert / update.0 - ORA-00904: "FECHAS_INICIO_FIN.FECHA_FIN": invalid identifier
2020/08/02 18:37:10 - Insert / update.0 -
2020/08/02 18:37:10 - Insert / update.0 - at org.pentaho.di.core.database.Database.getLookup(Database.java:3108)
2020/08/02 18:37:10 - Insert / update.0 - at org.pentaho.di.core.database.Database.getLookup(Database.java:3087)
2020/08/02 18:37:10 - Insert / update.0 - at org.pentaho.di.core.database.Database.getLookup(Database.java:3083)
2020/08/02 18:37:10 - Insert / update.0 - at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.lookupValues(InsertUpdate.java:89)
2020/08/02 18:37:10 - Insert / update.0 - at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.processRow(InsertUpdate.java:299)
2020/08/02 18:37:10 - Insert / update.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2020/08/02 18:37:10 - Insert / update.0 - at java.lang.Thread.run(Thread.java:748)
2020/08/02 18:37:10 - Insert / update.0 - Caused by: java.sql.SQLSyntaxErrorException: ORA-00904: "FECHAS_INICIO_FIN.FECHA_FIN": invalid identifier
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:208)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:886)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3613)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3657)
2020/08/02 18:37:10 - Insert / update.0 - at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1495)
2020/08/02 18:37:10 - Insert / update.0 - at org.pentaho.di.core.database.Database.getLookup(Database.java:3093)
2020/08/02 18:37:10 - Insert / update.0 - ... 6 more
2020/08/02 18:37:10 - pupu - Transformation detected one or more steps with errors.
2020/08/02 18:37:10 - pupu - Transformation is killing the other steps!
2020/08/02 18:37:10 - pupu - ERROR (version 9.0.0.0-423, build 9.0.0.0-423 from 2020-01-31 04.53.04 by buildguy) : Errors detected!
2020/08/02 18:37:10 - Spoon - The transformation has finished!!
2020/08/02 18:37:10 - pupu - ERROR (version 9.0.0.0-423, build 9.0.0.0-423 from 2020-01-31 04.53.04 by buildguy) : Errors detected!
2020/08/02 18:37:10 - pupu - ERROR (version 9.0.0.0-423, build 9.0.0.0-423 from 2020-01-31 04.53.04 by buildguy) : Errors detected!
I assume this is due to the TYPE I have created for that table; however, I don't know how to solve the problem without removing the TYPE (which I really don't want to do).
Here's the script for the creation of my table:
CREATE OR REPLACE TYPE Fechas_inicio_fin AS OBJECT(
fecha_inicio date,
fecha_fin date
);
/
CREATE TABLE Persona_y_Estado (
id number primary key,
fechas_inicio_fin Fechas_inicio_fin,
fk_persona number,
fk_estado number
);
/
insert into Persona_y_estado (fechas_inicio_fin, fk_persona, fk_estado) values (fechas_inicio_fin(TO_DATE('6/20/2020', 'mm/dd/yyyy'), TO_DATE('1/1/2050', 'mm/dd/yyyy')), 1, 1);
insert into Persona_y_estado (fechas_inicio_fin, fk_persona, fk_estado) values (fechas_inicio_fin(TO_DATE('6/20/2020', 'mm/dd/yyyy'), TO_DATE('1/1/2050', 'mm/dd/yyyy')), 2, 1);
insert into Persona_y_estado (fechas_inicio_fin, fk_persona, fk_estado) values (fechas_inicio_fin(TO_DATE('6/20/2020', 'mm/dd/yyyy'), TO_DATE('1/1/2050', 'mm/dd/yyyy')), 3, 1);
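One possibly relevant detail: in Oracle SQL, attributes of an object-type column can only be referenced through a table alias, and the failing lookup in the log uses the unqualified form FECHAS_INICIO_FIN.FECHA_FIN. A minimal JDBC sketch of the alias-qualified access (hypothetical connection details; this is not the SQL the Insert/Update step generates):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ObjectTypeLookupDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";   // hypothetical connection
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            // Attributes of an object-type column must be referenced through a
            // table alias; an unqualified FECHAS_INICIO_FIN.FECHA_FIN raises ORA-00904.
            String sql = "SELECT p.id, p.fechas_inicio_fin.fecha_fin "
                       + "FROM persona_y_estado p WHERE p.id = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, 1L);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong(1) + " -> " + rs.getDate(2));
                    }
                }
            }
        }
    }
}
```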
FlowMafia
(111 rep)
Aug 2, 2020, 10:47 PM
2
votes
1
answers
4000
views
Pentaho Data Integration: 'Execute SQL statement' to create trigger
I'm trying to create a MySQL trigger via an `Execute SQL Statement` step in my transformation. Below are the commands I'm attempting to run.
DROP TRIGGER IF EXISTS standing_actions_swap_before_insert_row_tr;
DELIMITER $$
CREATE DEFINER=`user`@`%` TRIGGER standing_actions_swap_before_insert_row_tr BEFORE INSERT ON standing_actions_swap
FOR EACH ROW
BEGIN
SET NEW.id = CONCAT(NEW.acad_career, NEW.status, NEW.action);
END
$$
DELIMITER ;
Unfortunately, for whatever reason, while the following works in various other DB software suites (e.g. Sequel Pro, Aqua Data Studio), I always get an error in Pentaho Data Integration. I've tried omitting the delimiters and checking and unchecking the Execute as a Single Statement box; unfortunately, it still doesn't work.
Any help is appreciated. Thanks!
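A note on the DELIMITER lines: DELIMITER is a command of the mysql command-line client, not server-side SQL, so anything sent over JDBC (which is how an Execute SQL step talks to the server) should contain only the bare statements; a single-statement trigger body also needs no BEGIN ... END. A minimal sketch of sending the same trigger straight over JDBC, with hypothetical connection details:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTriggerViaJdbc {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://dbhost:3306/mydb";   // hypothetical connection details
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement()) {
            st.execute("DROP TRIGGER IF EXISTS standing_actions_swap_before_insert_row_tr");
            // No DELIMITER here: that keyword belongs to the mysql command-line client,
            // so a JDBC statement (or a PDI step run as a single statement) should
            // receive only the DDL. The DEFINER clause is omitted and defaults to
            // the connecting user.
            st.execute(
                "CREATE TRIGGER standing_actions_swap_before_insert_row_tr " +
                "BEFORE INSERT ON standing_actions_swap " +
                "FOR EACH ROW " +
                "SET NEW.id = CONCAT(NEW.acad_career, NEW.status, NEW.action)");
        }
    }
}
```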
---
Edit:
Here's is the stack trace:
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : An error occurred, processing will be stopped:
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 - Couldn't execute SQL: DELIMITER $$
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 - CREATE DEFINER=`user`@`%` TRIGGER standing_actions_swap_before_insert_row_tr BEFORE INSERT ON standing_actions_swap
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 - FOR EACH ROW
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 - BEGIN
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 - SET NEW.id = CONCAT(NEW.acad_career, NEW.status, NEW.action)
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 -
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DELIMITER $$
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 - CREATE DEFINER=`user`@`%` TRIGGER standing_actions_swap_before_inse' at line 1
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : Error initializing step [Recreate ac.student_actions_swap before_insert trigger]
2015/05/15 16:13:13 - recreate_standing_actions_swap_trigger - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : Step [Recreate ac.student_actions_swap before_insert trigger.0] failed to initialize!
2015/05/15 16:13:13 - Recreate ac.student_actions_swap before_insert trigger.0 - Finished reading query, closing connection.
2015/05/15 16:13:13 - Spoon - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : recreate_standing_actions_swap_trigger: preparing transformation execution failed
2015/05/15 16:13:13 - Spoon - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : org.pentaho.di.core.exception.KettleException:
2015/05/15 16:13:13 - Spoon - We failed to initialize at least one step. Execution can not begin!
2015/05/15 16:13:13 - Spoon -
2015/05/15 16:13:13 - Spoon -
2015/05/15 16:13:13 - Spoon - at org.pentaho.di.trans.Trans.prepareExecution(Trans.java:1149)
2015/05/15 16:13:13 - Spoon - at org.pentaho.di.ui.spoon.trans.TransGraph$27.run(TransGraph.java:3989)
2015/05/15 16:13:13 - Spoon - at java.lang.Thread.run(Thread.java:695)
2015/05/15 16:13:13 - recreate_standing_actions_swap_trigger - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : Errors detected!
2015/05/15 16:13:13 - recreate_standing_actions_swap_trigger - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : Errors detected!
Kurt Mueller
(201 rep)
May 15, 2015, 06:38 PM
• Last activity: Aug 2, 2020, 07:07 AM
1
votes
0
answers
19
views
Date getting truncated on PDI 7.1 for salesforce DB - Need workaround
We are working on a transformation which inserts dates into a Salesforce DB and are seeing issues.
Below are the issues: a transformation working in PDI CE 6.1 does not work in CE 7.1 or CE 8.3.
Error :
>2019/10/31 15:46:32 - Salesforce Insert.0 - :Errors were found on item 0 Error code is: INVALID_TYPE_ON_FIELD_IN_RECORD Error message: Job Start Time: value not of required type: 2019-10-31T15:46:29.211Z
# On 6.1:
The script was inserting a string variable into the date field on the SF DB and was working fine.
# For 7.1:
We changed the string to a date datatype using a Select Values step and executed; the step now inserts data, but we saw that the date is shown as one day prior and the time as 8 PM.
# For 8.3:
We changed the date-generation step to a JavaScript step, then changed the data type to date. Here the insert works OK. Now if we run the transformation in 7.1, it again truncates the time field.
This seems to be the defect - https://jira.pentaho.com/browse/PDI-16480
We would like to know if anyone has encountered this issue, or whether there is any workaround for it in 7.1, i.e. can any JAR file for Salesforce be copied from the 6.1 or 8.3 version to 7.1 to make this work?
User2019
(15 rep)
Nov 4, 2019, 05:06 AM
• Last activity: Nov 4, 2019, 04:44 PM
0
votes
1
answers
851
views
Pentaho Data Integration - Kettle- Update Identity Column in Microsoft SQL Server
For migration purposes I need to migrate existing IDs from my old database to my new table in MS SQL Server.
I can truncate my table in MS SQL Server and then adjust my database connection in Pentaho to 'SET IDENTITY_INSERT tablename ON'. Then I am able to insert my IDs through a table output step, but I want to be able to UPDATE my existing IDs without truncating my table in MS SQL Server.
Could this be possible using Pentaho?
sql_mind
(17 rep)
Oct 28, 2019, 10:06 AM
• Last activity: Oct 28, 2019, 10:47 AM
1
votes
1
answers
3420
views
How can I prevent Pentaho DI/Spoon from deadlocking?
I have a transformation with four steps:
1. Read a pipe-delimited text file (a Text file input step)
2. Write to a staging table, with truncation (import table)
3. Run a stored procedure against the table populated in #2
4. Run a stored procedure against the table modified in #3
These have hops between them: 1 to 2 to 3 to 4.
It consistently hangs (tested several times, waiting up to 1.6 hours); that is, Pentaho reports starting the job in the Logging window, then nothing more. The GUI itself does not hang, in fact the Stop button disables after a moment, but the transformation does not proceed.
It appears to be deadlocking: checking for blocked sessions on the target database (PostgreSQL), I see two sessions: one for the stored procedure in step #3 (which shouldn't have started yet), which is blocked by a session that is trying to truncate the table in step #2. The TRUNCATE never completes (though Postgres does not report it as blocked), so no records get written and step #3 cannot run. If I kill either session, the transformation fails, with no records written; the sprocs in steps #3 or #4 sometimes run, but with no new data on which to operate.
If I disable the hop from 2 to 3, the job runs fine, and it takes just a couple of minutes.
Why is the sproc in step #3 trying to run when it has a dependency on step #2? Does a Pentaho "hop" not mean "run A, wait for it to end, and then run B"? What can I do to prevent the deadlock situation, or have I misread what's going on?
Jon of All Trades
(5987 rep)
Apr 4, 2017, 05:39 PM
• Last activity: Apr 5, 2017, 07:12 PM
3
votes
1
answers
1696
views
How SQL Server determines the data transfer rate
We are having some issues with the data transfer rate when extracting data from SQL Server. Our scenario is as follows:
- SQL Server 2008, in a Windows Server 2003 with 32 GB RAM (located in Guatemala).
- Windows Server 2012 R2, in a EC2 instance in AWS (Virginia USA), with 16 GB RAM.
- For extraction we are using Pentaho Spoon, extracting ~13 tables at the same time.
The issue is that sometimes tables load fast, but sometimes one or two tables load very slowly. It looks like the data transfer rate is sometimes really bad for those tables, transferring at 50 rows/s instead of the ~2,000 rows/s that should be normal.
Almost all the time, SQL Server shows a wait type of ASYNC_NETWORK_IO in the Activity Monitor, which I guess is due to network activity. Even when the data is loaded fast, that wait type is displayed.
How does SQL Server determine the transfer rate for each table?
Is it based on indexes or network activity?
The application which is consuming the data, is processing in batches of 5000 rows. It is receiving the data and inserting into a stage database every 5000 rows, even inserting into a flat file. Therefore, I don't think the application is processing the data row by row.
Locally, the performance is better (it's reasonable) but there is always a difference among some tables. Some of them are loaded faster than others. It's weird. I'm not a DBA, so I think it could be something related to indexes? Or something about DB tuning.
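For context on the wait type: ASYNC_NETWORK_IO accumulates while SQL Server is waiting for the client to acknowledge and consume result rows, so the apparent per-table transfer rate is largely set by how quickly the consumer drains the result set, not by indexes on the source. A minimal JDBC-side sketch of that consuming loop, with hypothetical connection details; setFetchSize is only a hint and its effect depends on the driver's cursoring mode:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical source connection; the principle is that the server only
        // sends rows as fast as this loop (or the ETL step behind it) consumes them.
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=Source";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement()) {
            st.setFetchSize(10000);   // hint: stream bigger batches to the client
            try (ResultSet rs = st.executeQuery("SELECT * FROM dbo.BigTable")) {  // hypothetical table
                long n = 0;
                while (rs.next()) { n++; }   // consume rows promptly
                System.out.println(n + " rows read");
            }
        }
    }
}
```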
georgeos
(131 rep)
Mar 8, 2017, 11:15 PM
• Last activity: Mar 15, 2017, 03:54 AM
3
votes
1
answers
559
views
PostgreSQL batch inserts fail silent randomly
I have been using [Kettle](http://community.pentaho.com/projects/data-integration/) to import data from a SQL Server to PostgreSQL on AWS RDS and when I was enabling all of my foreign key constraints I realized that data was missing, often in the even hundreds.
But checking the log of the Kettle job, it seems to be doing all reads and writes correctly from a table of 1923 entries:
2017/02/26 21:36:42 - Table output.5 - Finished processing (I=0, O=240, R=240, W=240, U=0, E=0)
2017/02/26 21:36:42 - Table output.3 - Finished processing (I=0, O=240, R=240, W=240, U=0, E=0)
2017/02/26 21:36:42 - Table output.7 - Finished processing (I=0, O=240, R=240, W=240, U=0, E=0)
2017/02/26 21:36:42 - Table output.6 - Finished processing (I=0, O=240, R=240, W=240, U=0, E=0)
2017/02/26 21:36:42 - Table output.1 - Finished processing (I=0, O=241, R=241, W=241, U=0, E=0)
2017/02/26 21:36:42 - Table output.2 - Finished processing (I=0, O=241, R=241, W=241, U=0, E=0)
2017/02/26 21:36:42 - Table output.4 - Finished processing (I=0, O=240, R=240, W=240, U=0, E=0)
2017/02/26 21:36:42 - Table output.0 - Finished processing (I=0, O=241, R=241, W=241, U=0, E=0)
But when running the count query on Postgres I get 1423 entries, and I see similar behavior on maybe 10% of all the tables I have transferred.
As I don't have any prior experience with either Kettle or Postgres, what could be causing this issue?
Erik Karlsson
(141 rep)
Feb 27, 2017, 06:51 AM
• Last activity: Feb 28, 2017, 01:05 PM
0
votes
2
answers
1565
views
Pentaho Kettle - Unable to connect to virtual Oracle Database
I'm attempting to connect to a virtualized Oracle database but I am receiving the following error:
Error connecting to database: (using class oracle.jdbc.driver.OracleDriver)
Listener refused the connection with the following error:
> ORA-12505, TNS:listener does not currently know of SID given in
> connect descriptor The Connection descriptor used by the client was:
> foohost:1521:foodba
Since SIDs are not usable with virtualized databases, I need to provide a fully qualified URL (from what I have heard); however, I'm not certain how to configure Pentaho to accomplish this.
Stephen Brewster
Oct 18, 2015, 06:47 PM
• Last activity: Jan 31, 2017, 08:41 AM
3
votes
1
answers
348
views
BI - Cube Concept and MDX Query Help
In a [Mondrian](http://community.pentaho.com/projects/mondrian/) Schema, I have a fact table for Sales, a dimension table Time and a dimension table Client.
I need to create a [Saiku](http://www.meteorite.bi/products/saiku) report where I can show how many clients joined per year. I have the Time dimension and a join_date in Client (which does NOT link to the Time dimension key; it is just a date field).
If I create a cube adding join_date as a level of the Client table, I wouldn't know how to filter on that later in Saiku.
Does anyone know the best way to add JOIN_DATE on Client? I just need a report: clients that have joined the company, by year.
And how should ` ` work in cubes? When I tested using this ` `, it was not visible in Saiku...
lucassig
(41 rep)
Jan 20, 2014, 07:18 PM
• Last activity: Jan 28, 2016, 10:11 PM
0
votes
1
answers
2779
views
Extract data from Oracle DBF files
I'm completely new to Oracle and I was provided with what appears to be an Oracle database backup (170 DBF files and 3 CTL files).
I'd like to extract the information using SSIS or Pentaho's Kettle, but I'm not sure if that's possible.
Can you please let me know if that's possible, and if so, what data provider should I use?
I don't have Oracle installed, so I guess it'll be hard and expensive to restore the DBFs.
Thanks in advance
Daniel Martinez
(117 rep)
Nov 17, 2015, 11:51 PM
• Last activity: Nov 18, 2015, 10:43 PM
2
votes
2
answers
1460
views
How to perform ETL from RDBMS to Neo4j
I have a mature 50+ table web application based on MySQL. In order to do some advanced data mining I want to use Neo4j and the goodness of Cypher. However, I'm having a hard time migrating my data from the RDBMS to Neo4j. I don't want to do it by hand, because that is generally unmaintainable.
I use Pentaho, which supports the Neo4j JDBC driver, but I'm having a hard time wrapping my head around loading the extracted data into Neo4j, since the GUI is clearly designed for RDBMSs and there are no sources about how to do it. Has anyone run into a similar problem and found a solution?
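One way to think about the load side: the Neo4j JDBC driver accepts Cypher wherever SQL would normally go, so a generic SQL-executing step can still issue graph writes. A minimal sketch with a hypothetical URL (the exact URL form and credentials depend on the driver version in use; older drivers used an HTTP/REST URL, newer ones Bolt):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CypherOverJdbc {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL; check the form expected by your Neo4j JDBC driver version.
        String url = "jdbc:neo4j:bolt://localhost:7687";
        try (Connection con = DriverManager.getConnection(url, "neo4j", "password");
             Statement st = con.createStatement()) {
            // Cypher is sent where SQL would normally go; MERGE keeps reruns idempotent,
            // which matters when a Kettle transformation is restarted.
            st.execute("MERGE (c:Customer {id: 42}) SET c.name = 'Alice'");
        }
    }
}
```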
JohnnyM
(121 rep)
May 12, 2013, 11:01 PM
• Last activity: Oct 12, 2015, 05:39 AM
0
votes
2
answers
512
views
Daily snapshot fact table
I have a list of products per customer. Customers can delete or add products to their list. I want to know, for each day, what the list of products (and the count) is for each customer.
My first approach is to have a fact table with all products per customer for each day.
Is this the best approach?
This fact table will grow at the same rate that products are added to customers' lists.
BTW, I'm using Pentaho.
Ivan Pereira
(101 rep)
Nov 19, 2013, 12:30 PM
• Last activity: Sep 22, 2015, 10:26 AM
0
votes
1
answers
21664
views
How to convert timestamp to string in Pentaho Kettle?
I have a timestamp field in my input table like this: `yyyy-mm-dd hh24:mm:dd`, and I want the output string like this: `yyyymmdd`.
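In PDI this is typically done with a Select values step (Meta-data tab) that converts the field to String with a yyyyMMdd conversion mask; the masks follow Java date-format semantics, so a minimal standalone illustration of the conversion (with a made-up input value) looks like this:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class TimestampToString {
    public static void main(String[] args) throws Exception {
        SimpleDateFormat in  = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        SimpleDateFormat out = new SimpleDateFormat("yyyyMMdd");
        Date ts = in.parse("2015-06-26 08:53:00");   // made-up sample timestamp
        System.out.println(out.format(ts));          // 20150626
    }
}
```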
Lilantha Lakmal
(53 rep)
Jun 26, 2015, 08:53 AM
• Last activity: Jun 26, 2015, 01:38 PM
1
votes
0
answers
860
views
Pentaho Kettle (Data Intergration) MongoDB Aggregation
I'm using Kettle v5.2, which supports the aggregation pipeline in MongoDB.
When using MongoDB Input, the query works for small data sets, but I need to add the allowDiskUse option to the query and can't figure out how to do that in Pentaho.
I tested this option in the mongo shell and it works as expected:
http://docs.mongodb.org/manual/reference/method/db.collection.aggregate/
http://wiki.pentaho.com/display/EAI/MongoDB+Input#MongoDBInput-queryaggpipeline
y0ft88
(11 rep)
Dec 4, 2014, 05:08 PM
0
votes
0
answers
3349
views
Communications link error with MySQL
I have a process which runs via Pentaho on a Unix server against MySQL 5.6.19, using version 5.1.32 of the MySQL JDBC driver. We actually found this driver helped with this error in some cases compared to 5.1.26.
The process is an extremely long-running, heavy-load analytical process. Last night I had a failure, and I can see from the system stats that there was a large amount of "iowait" on the system which started ~15 minutes before the failure.
The actual error is pasted below. I've lived with this error over the years and usually it is simply due to the net_write_timeout, so that has already been set to 86,400, or 24 hours. For good measure I set net_read_timeout to the same. I also had a situation where after 8 hours we hit the wait_timeout with exactly the same error, so that is now also at 86,400.
So I don't really understand the error. The last packet sent was 15 seconds ago and the last packet received was 142 ms ago, so this cannot be a timeout? It must be something else? Yet when googling around for this error, every time people say "it's the timeout". I've also set tcpKeepAlive=true for good measure.
The MySQL and Pentaho process are both working on the same local box. No errors in /var/log/messages or in the MySQL error log.
It's hard to know if this should be posted here, or on the sysadmin site, or elsewhere.
Here's the full error:
org.pentaho.di.core.exception.KettleDatabaseException:
Couldn't get row from result set
Communications link failure
The last packet successfully received from the server was 142 milliseconds ago. The last packet sent successfully to the server was 14,468,141 milliseconds ago.
at org.pentaho.di.core.database.Database.getRow(Database.java:2420)
at org.pentaho.di.core.database.Database.getRow(Database.java:2389)
at org.pentaho.di.trans.steps.tableinput.TableInput.processRow(TableInput.java:145)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 142 milliseconds ago. The last packet sent successfully to the server was 14,468,141 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:408)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1137)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3697)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3586)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4131)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:925)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:2033)
at com.mysql.jdbc.RowDataDynamic.nextRecord(RowDataDynamic.java:408)
at com.mysql.jdbc.RowDataDynamic.next(RowDataDynamic.java:387)
at com.mysql.jdbc.ResultSetImpl.next(ResultSetImpl.java:7008)
at org.pentaho.di.core.database.Database.getRow(Database.java:2408)
... 4 more
Caused by: java.io.EOFException: Can not read response from server. Expected to read 45 bytes, read 4 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3143)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3653)
... 12 more
Codek
(220 rep)
Sep 12, 2014, 06:52 AM
Showing page 1 of 20 total questions