Feed aggregator

Updating parameter files from REST

DBASolved - Tue, 2019-12-10 10:12

One of the most important and time-consuming tasks with Oracle GoldenGate is building parameter files for the GoldenGate processes. In the past, this required you to access GGSCI and run commands like: GGSCI> edit params <process group> After which, you then had to bounce the process group for the changes to […]

The post Updating parameter files from REST appeared first on DBASolved.

Categories: DBA Blogs

How to optimize a campaign to get the most out of mobile advertising

VitalSoftTech - Tue, 2019-12-10 09:54

When running a marketing campaign, we must optimize it in the best way possible to get the most out of it. Otherwise, it is just advertising spend going to waste. The same goes for mobile advertising. We are here to discuss the best mobile ad strategies. However, before we start, here is a question for […]

The post How to optimize a campaign to get the most out of mobile advertising appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Challenger Series Returns to Southern California in 2020 with Newport Beach, Indian Wells Events

Oracle Press Releases - Tue, 2019-12-10 09:00
Press Release
Oracle Challenger Series Returns to Southern California in 2020 with Newport Beach, Indian Wells Events

Indian Wells, Calif.—Dec 10, 2019

The Oracle Challenger Series today announced its return to Southern California for two events in early 2020. The third stop of the 2019-2020 series takes place at the Newport Beach Tennis Club from January 27 to February 2. The Indian Wells Tennis Garden hosts the final tournament from March 2 to 8.

Now in its third year, the Oracle Challenger Series helps up-and-coming American players secure both ranking points and prize money in the United States. The two American men and two American women who accumulate the most points over the course of the Challenger Series receive wild cards into the singles main draws at the BNP Paribas Open in Indian Wells. As part of the Oracle Challenger Series’ mission to grow the sport and make professional tennis events more accessible, each tournament is free and open to the public.

The Newport Beach and Indian Wells events will conclude the 2019-2020 Road to Indian Wells and are instrumental in determining which American players receive wild card berths at the 2020 BNP Paribas Open. At the halfway point of the Challenger Series, Houston champion Marcos Giron holds the top spot for the men. Usue Arconada is in first place for the women following an impressive showing in New Haven with finals appearances in both singles and doubles. Trailing just behind them are Tommy Paul, the men’s champion in New Haven, and CoCo Vandeweghe, the women’s runner-up in Houston.

The Newport Beach event has propelled its champions to career-defining seasons over the previous two years. Americans Taylor Fritz and Danielle Collins began their steady climb up the world rankings by capturing the titles at the 2018 inaugural event. Bianca Andreescu’s 2019 title marked the beginning of her meteoric rise to WTA stardom. Likewise, the Indian Wells event has featured some of the Challenger Series’ strongest player fields and produced champions Martin Klizan, Sara Errani, Kyle Edmund and Viktorija Golubic.

The Newport Beach tournament will also feature the Oracle Champions Cup which takes place on Saturday, February 1. Former World No. 1 and 2003 US Open Champion Andy Roddick; 10-time ATP Tour titlist and former World No. 4 James Blake; 2004 Olympic silver medalist and 6-time ATP Tour singles winner Mardy Fish; and 2005 US Open semifinalist Robby Ginepri headline the one-night tournament. The event consists of two one-set semifinals with the winners meeting in a one-set championship match.

Tickets to the Oracle Champions Cup go on sale to the general public on Tuesday, December 17. Special VIP packages, including play with the pros, special backstage access and an exclusive player party, are also available.

For more information about the Oracle Challenger Series visit oraclechallengerseries.com, and be sure to follow @OracleChallngrs on Twitter and @OracleChallengers on Instagram. To inquire about volunteer opportunities, including becoming a ball kid, please email oraclechallengerseries@desertchampions.com.

Contact Info
Mindi Bach
Oracle
mindi.bach@oracle.com
About the Oracle Challenger Series

The Oracle Challenger Series was established to help up-and-coming American tennis players secure both ranking points and prize money. The Oracle Challenger Series is the next chapter in Oracle’s ongoing commitment to support U.S. tennis for men and women at both the collegiate and professional level. The Challenger Series features equal prize money in a groundbreaking tournament format that combines the ATP Challenger Tour and WTA 125K Series.

The Oracle Challenger Series offers an unmatched potential prize of wild cards into the main draw of the BNP Paribas Open, widely considered the top combined ATP Tour and WTA professional tennis tournament in the world, for the top two American male and female finishers.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The Global Oracle APEX Community Delivers. Again.

Joel Kallman - Mon, 2019-12-09 16:59

Oracle was recently recognized as a November 2019 Gartner Peer Insights Customers’ Choice in the Enterprise Low-Code Application Platform market for Oracle APEX. You can read more about that here.

I personally regard this as a distinction for the global Oracle APEX community.  We asked for your assistance by participating in these reviews, and you delivered.  Any time we've asked for help or feedback, the Oracle APEX community has selflessly and promptly responded.  You have always been very gracious with your time and energy.

I was telling someone recently how I feel the Oracle APEX community is unique within all of Oracle, but I also find it to be unique within the industry.  It is the proverbial two-way partnership that many talk about but rarely live out through their actions.  We remain deeply committed to our customers' personal and professional success - it is a mindset which permeates our team.  We are successful only when our customers and partners are successful.

Thank you to all who participated in the Gartner Peer Insights reviews - customers, partners who nudged their customers, and enthusiasts.  You, as a community, stand out amongst all others.  We are grateful for you.

Oracle Names Vishal Sikka to the Board of Directors

Oracle Press Releases - Mon, 2019-12-09 15:15
Press Release
Oracle Names Vishal Sikka to the Board of Directors

Redwood Shores, Calif.—Dec 9, 2019

Oracle (NYSE: ORCL) today announced that Dr. Vishal Sikka, founder and CEO of the AI company Vianai Systems, has been named to Oracle’s Board of Directors.  Before starting Vianai, Vishal was a top executive at SAP and the CEO of Infosys.

“The digital transformation of an enterprise is enabled by the rapid adoption of modern cloud applications and technologies,” said Oracle CEO Safra Catz. “Vishal clearly understands how Oracle’s Gen2 Cloud Infrastructure, Autonomous Database and Applications come together in the Oracle Cloud to help our customers drive business value and adapt to change. I am very happy that he will be joining the Oracle Board.”

“For years, the Oracle Database has been the heartbeat and life-blood of every large and significant organization in the world,” said Dr. Vishal Sikka. “Today, Oracle is the only one of the big four cloud companies that offers both Enterprise Application Suites and Secure Infrastructure technologies in a single unified cloud. Oracle’s unique position in both applications and infrastructure paves the way for enormous innovation and growth in the times ahead. I am excited to have the opportunity to join the Oracle Board, and be part of this journey.”

“Vishal is one of the world’s leading experts in Artificial Intelligence and Machine Learning,” said Oracle Chairman and CTO Larry Ellison. “These AI technologies are key foundational elements of the Oracle Cloud’s Autonomous Infrastructure and Intelligent Applications. Vishal’s expertise and experience make him ideally suited to provide strategic vision and expert advice to our company and to our customers. He is a most welcome addition to the Oracle Board.”

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com/investor or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor Statement

Statements in this press release relating to Oracle’s future plans, expectations, beliefs, intentions and prospects are “forward-looking statements” and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (“SEC”) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading “Risk Factors.” Copies of these filings are available online from the SEC, by contacting Oracle Corporation’s Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of December 9, 2019. Oracle undertakes no duty to update any statement in light of new information or future events.

Upcoming Webinar: Is Your Sensitive Data Playing Hide and Seek with You?

Is Your Sensitive Data Playing Hide and Seek with You?

Thursday, December 12, 2019 - 2:00 pm EST

Your Oracle databases and ERP applications may contain sensitive personal data such as Social Security numbers, credit card numbers, addresses, dates of birth, and salary information. Understanding which tables and columns hold sensitive data is critical to protecting that data and ensuring compliance with regulations like GDPR, PCI, and the new California Consumer Privacy Act (CCPA). However, sensitive data is like a weed and can spread quickly if not properly managed. The challenge is how to effectively and continuously find sensitive data, especially in extremely large databases and data warehouses. This educational webinar will discuss methodologies and tools to find sensitive data, such as searching column names, crawling the database table by table, and performing data qualification to eliminate false positives. Other locations where sensitive data might reside, such as trace files, dynamic views (e.g., V$SQL_BIND_DATA), and materialized views, will also be reviewed.
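
A rough illustration of the column-name search approach mentioned above (a minimal sketch only -- the pattern list, schema exclusions and any data qualification step would need to be far more complete in a real scan):

select owner, table_name, column_name
from   dba_tab_columns
where  (   upper(column_name) like '%SSN%'
        or upper(column_name) like '%SOCIAL%SEC%'
        or upper(column_name) like '%CREDIT%CARD%'
        or upper(column_name) like '%BIRTH%'
        or upper(column_name) like '%SALARY%' )
and    owner not in ('SYS','SYSTEM')
order by owner, table_name, column_name;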

>>> Register for this webinar <<<

Oracle Database, Webinar
Categories: APPS Blogs, Security Blogs

Oracle Health Sciences Participates in TOP Tech Sprint

Oracle Press Releases - Mon, 2019-12-09 07:00
Press Release
Oracle Health Sciences Participates in TOP Tech Sprint
Could enable the use of open data and AI to match cancer patients with clinical trials and experimental therapies

Redwood Shores, Calif.—Dec 9, 2019

Clinical trials are an essential gateway for getting new cures to market. However, many patients struggle to find the right trials that meet their unique medical requirements. To explore better ways to match patients with the right trials, Oracle Health Sciences is once again participating in The Opportunity Project (TOP) Technology Sprint: Creating the Future of Health.

This year’s entry joins Oracle technology with de-identified precision oncology open data sets from the United States Department of Veterans Affairs and the National Cancer Institute. The demo will highlight how Artificial Intelligence (AI) and customer experience solutions could be used to connect cancer patients with available clinical trials and experimental therapies.

“It is paramount that we collaborate with our peers within the federal government and technology communities to collectively evaluate what innovative opportunities exist and to explore the potential applications AI and machine learning can offer to fight deadly diseases such as cancer,” said Steve Rosenberg, senior vice president and general manager, Oracle Health Sciences. “The opportunity to participate in the TOP challenge lets us apply Oracle solutions in new ways while also harnessing the learnings to benefit the lives of patients who need treatment in the future.”

Connecting Patients with Critical Trials

This year Oracle’s entry builds on the last technology sprint by leveraging open datasets to explore more deeply the applications of machine learning (ML) and AI. In addition, it demonstrates how features for prospective trial recruitment will work with appropriate identity protection.

Oracle’s submission uses a combination of Oracle Healthcare Foundation, Oracle CX Service, Oracle Policy Automation, Oracle Digital Assistant and Oracle Labs PGX: Parallel Graph AnalytiX solutions to create a demonstration that in the future might enable connecting patients and clinical staff through intuitive interfaces that provide data at the point of care. A graphical interface would allow physicians to track a patient’s care journey and would indicate which clinical trial options are available. It applies AI to standardize data from clinical trial requirement forms to specify eligibility criteria. The result can be a more simplified and personalized experience to help determine the best treatment for patients. Patients can also keep their identifying information from being shared, while allowing only their de-identified clinical data to be made available so they can receive information about new programs, clinical studies or therapies that may be of value to their care.

TOP is a 12-week technology development sprint that brings together technology developers, communities, and government to solve real-world problems using open data. TOP will host its Demo Day 2019 on December 10, 2019 at the U.S. Census Bureau in Suitland, MD.

Contact Info
Judi Palmer
Oracle
+1 650.784.4119
judi.palmer@oracle.com
Rick Cohen
Blanc & Otus
+1 212.885.0563
rick.cohen@blancandotus.com
About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes to bring new drugs to market faster. As a leader in Life Sciences technology, Oracle Health Sciences is trusted by 30 of the top 30 pharma, 10 of the top 10 biotech and 10 of the top 10 CROs for managing clinical trials and pharmacovigilance around the globe.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650.784.4119

Rick Cohen

  • +1 212.885.0563

Teri Meri Prem Kahani Cover | Keyboard Performance | by Dharun at Improviser Music Studio

Senthil Rajendran - Mon, 2019-12-09 03:13

My Son Dharun Performing at Improviser Music Studio

Teri Meri Prem Kahani Cover

DarkSide Cover | Keyboard Performance | by Dharun at Improviser Music Studio

Senthil Rajendran - Mon, 2019-12-09 03:11
My Son Dharun Performing at Improviser Music Studio

DarkSide Cover

Oracle Database 19c Automatic Indexing: Index Compression (Ghosteen)

Richard Foote - Sun, 2019-12-08 21:01
In my previous post on Automatic Indexing, I discussed how the default index column order (in the absence of other factors) is column id, the order in which the columns are defined in the table. In this post, I’ll explore whether this changes if index compression is also implemented. By default, Automatic Indexing does […]
Categories: DBA Blogs

Documentum – LDAP Config Object “certdb_location” has invalid value

Yann Neuhaus - Sun, 2019-12-08 02:00

In a previous blog, I talked about the automation of the LDAP/LDAPs creation. However, the first time that I applied these steps, I actually faced an issue and I couldn't really get my head around it at first. This will be a rather short post but I still wanted to share my thoughts because it might save you some headache. The issue is only linked to the SSL part of the setup, so there is no problem with basic non-secure LDAP communications.

So, after applying all the steps, everything went fine and I therefore tried to run the dm_LDAPSynchronization job to validate the setup. When I did, the generated log file wasn't so great:

[dmadmin@content-server-0 ~]$ cat $DOCUMENTUM/dba/log/repo01/sysadmin/LDAPSynchronizationDoc.txt
LDAPSynchronization Report For DocBase repo01 As Of 2019/09/22 12:45:57

2019-09-22 12:45:56:124 UTC [default task-79]: LDAP Synchronization Started @ Sun Sep 22 12:45:56 UTC 2019
2019-09-22 12:45:56:124 UTC [default task-79]:
2019-09-22 12:45:56:124 UTC [default task-79]: $JMS_HOME/server/DctmServer_MethodServer/deployments/ServerApps.ear/lib/dmldap.jar
2019-09-22 12:45:56:125 UTC [default task-79]: ---------------------------------------------------------------------------------
2019-09-22 12:45:56:125 UTC [default task-79]: Product-Name : Content Server-LDAPSync
2019-09-22 12:45:56:125 UTC [default task-79]: Product-Version : 16.4.0110.0167
2019-09-22 12:45:56:125 UTC [default task-79]: Implementation-Version : 16.4.0110.0167
2019-09-22 12:45:56:125 UTC [default task-79]: ---------------------------------------------------------------------------------
2019-09-22 12:45:56:125 UTC [default task-79]:
2019-09-22 12:45:56:126 UTC [default task-79]: Preparing LDAP Synchronization...
2019-09-22 12:45:57:101 UTC [default task-79]: INFO: Job Status: [LDAP Synchronization Started @ Sun Sep 22 12:45:56 UTC 2019]
2019-09-22 12:45:57:120 UTC [default task-79]: INFO: Job Status updated
2019-09-22 12:45:58:415 UTC [default task-79]: INFO: List of Ldap Configs chosen for Synchronization
2019-09-22 12:45:58:415 UTC [default task-79]: INFO:    >>>0812d6878000252c - Internal_LDAP<<<
2019-09-22 12:45:58:415 UTC [default task-79]: INFO:
2019-09-22 12:45:58:418 UTC [default task-79]:
2019-09-22 12:45:58:418 UTC [default task-79]: ==================================================================================
2019-09-22 12:45:58:420 UTC [default task-79]: Starting Sychronization for ldap config object >>>Internal_LDAP<<< ...
2019-09-22 12:45:58:425 UTC [default task-79]: Unexpected Error. Caused by: [DM_LDAP_SYNC_E_EXCEPTION_ERROR]error:  "Ldap Config Property "certdb_location" has invalid value "ldap_chain"."
2019-09-22 12:45:58:426 UTC [default task-79]: ERROR: DmLdapException:: THREAD: default task-79; MSG: [DM_LDAP_SYNC_E_EXCEPTION_ERROR]error:  "Ldap Config Property "certdb_location" has invalid value "ldap_chain"."; ERRORCODE: 100; NEXT: null
        at com.documentum.ldap.internal.sync.SynchronizationContextBuilder.getCertDbLocation(SynchronizationContextBuilder.java:859)
        at com.documentum.ldap.internal.sync.SynchronizationContextBuilder.setCertDbLocation(SynchronizationContextBuilder.java:225)
        at com.documentum.ldap.internal.sync.SynchronizationContextBuilder.buildSynchronizationContext(SynchronizationContextBuilder.java:49)
        at com.documentum.ldap.LDAPSync.prepareSync(LDAPSync.java:438)
        at com.documentum.ldap.LDAPSync.processJob(LDAPSync.java:238)
        at com.documentum.ldap.LDAPSync.execute(LDAPSync.java:80)
        at com.documentum.mthdservlet.DfMethodRunner.runIt(Unknown Source)
        at com.documentum.mthdservlet.AMethodRunner.runAndReturnStatus(Unknown Source)
        at com.documentum.mthdservlet.DoMethod.invokeMethod(Unknown Source)
        at com.documentum.mthdservlet.DoMethod.doPost(Unknown Source)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)
        at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
        at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
        at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
        at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
        at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
        at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
        at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
        at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
        at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:282)
        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:261)
        at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
        at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
        at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
        at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

2019-09-22 12:45:58:426 UTC [default task-79]: WARNING:   **** Skipping Ldap Config Object - Internal_LDAP ****
2019-09-22 12:45:58:775 UTC [default task-79]: Synchronization of ldap config object >>>Internal_LDAP<<< is finished
2019-09-22 12:45:58:775 UTC [default task-79]: ==================================================================================
2019-09-22 12:45:58:775 UTC [default task-79]:
2019-09-22 12:45:58:786 UTC [default task-79]: INFO: Job Status: [dm_LDAPSynchronization Tool had ERRORS at 2019/09/22 12:45:58. Total duration was 2 seconds.View the job's report for details.]
2019-09-22 12:45:58:800 UTC [default task-79]: INFO: Job Status updated
2019-09-22 12:45:58:800 UTC [default task-79]: LDAP Synchronization Ended @ Sun Sep 22 12:45:58 UTC 2019
2019-09-22 12:45:58:800 UTC [default task-79]: Session s2 released successfully
Report End  2019/09/22 12:45:58
[dmadmin@content-server-0 ~]$

 

After a bunch of checks inside the repository, everything seemed fine. All the objects had the correct content, the correct references, and so on. However, there was one thing that wasn't exactly as per KB6321243, and that was the extension of the Trust Chain file. If you look at the basics of SSL Certificate encodings, there are two main possibilities: DER (binary = not readable) or PEM (ASCII = readable). In addition to that, you can also have files with CRT or CER extensions, but they are always either DER or PEM encoded. The KB asks you to have a PEM encoded SSL Certificate, so this file can technically have a ".pem", ".cer" or ".crt" extension; the extensions are more or less interchangeable. Therefore, here I was, thinking that I could keep my ".crt" extension for the PEM encoded SSL Trust Chain.
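
For reference (standard OpenSSL commands, not part of the original troubleshooting; the DER file name is just an example), this is how you can quickly check whether a certificate file is PEM encoded and convert a DER encoded file to PEM if needed:

[dmadmin@content-server-0 ~]$ # prints the subject/issuer only if the file is PEM encoded, errors out otherwise
[dmadmin@content-server-0 ~]$ openssl x509 -inform pem -in ldap_chain.crt -noout -subject -issuer
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ # convert a DER encoded certificate to PEM
[dmadmin@content-server-0 ~]$ openssl x509 -inform der -in some_cert.der -out some_cert.pem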

To validate that this was the issue, I switched my file to the “.pem” extension and updated the “dm_location” Object:

[dmadmin@content-server-0 ~]$ cd $DOCUMENTUM/dba/secure/ldapdb
[dmadmin@content-server-0 ldapdb]$ mv ldap_chain.crt ldap_chain.pem
[dmadmin@content-server-0 ldapdb]$ 
[dmadmin@content-server-0 ldapdb]$ iapi repo01 -U${USER} -Pxxx


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68780001402 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> retrieve,c,dm_location where object_name='ldap_chain'
...
3a12d68780002522
API> get,c,l,file_system_path
...
$DOCUMENTUM/dba/secure/ldapdb/ldap_chain.crt
API> set,c,l,file_system_path
SET> $DOCUMENTUM/dba/secure/ldapdb/ldap_chain.pem
...
OK
API> get,c,l,file_system_path
...
$DOCUMENTUM/dba/secure/ldapdb/ldap_chain.pem
API> save,c,l
...
OK
API> ?,c,UPDATE dm_job OBJECTS set run_now=true, set a_next_invocation=DATE(NOW) WHERE object_name='dm_LDAPSynchronization'
objects_updated
---------------
              1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "1 objects were affected by your UPDATE statement."

API> exit
Bye
[dmadmin@content-server-0 ldapdb]$

 

With the above, I just changed the extension of the file on the file system and its reference in the "dm_location" Object. The last iAPI command triggered the dm_LDAPSynchronization job to run. Checking the new log file, the issue was indeed solved, which confirmed that having a PEM encoded Trust Chain isn't enough on its own. There is actually a hardcoded value/check inside Documentum which forces you to use the ".pem" extension and nothing else:

[dmadmin@content-server-0 ldapdb]$ cat $DOCUMENTUM/dba/log/repo01/sysadmin/LDAPSynchronizationDoc.txt
LDAPSynchronization Report For DocBase repo01 As Of 2019/09/22 13:19:33

2019-09-22 13:19:30:360 UTC [default task-87]: LDAP Synchronization Started @ Sun Sep 22 13:19:30 UTC 2019
2019-09-22 13:19:30:360 UTC [default task-87]:
2019-09-22 13:19:30:361 UTC [default task-87]: $JMS_HOME/server/DctmServer_MethodServer/deployments/ServerApps.ear/lib/dmldap.jar
2019-09-22 13:19:30:367 UTC [default task-87]: ---------------------------------------------------------------------------------
2019-09-22 13:19:30:367 UTC [default task-87]: Product-Name : Content Server-LDAPSync
2019-09-22 13:19:30:367 UTC [default task-87]: Product-Version : 16.4.0110.0167
2019-09-22 13:19:30:367 UTC [default task-87]: Implementation-Version : 16.4.0110.0167
2019-09-22 13:19:30:367 UTC [default task-87]: ---------------------------------------------------------------------------------
2019-09-22 13:19:30:367 UTC [default task-87]:
2019-09-22 13:19:30:370 UTC [default task-87]: Preparing LDAP Synchronization...
2019-09-22 13:19:32:425 UTC [default task-87]: INFO: Job Status: [LDAP Synchronization Started @ Sun Sep 22 13:19:30 UTC 2019]
2019-09-22 13:19:32:453 UTC [default task-87]: INFO: Job Status updated
2019-09-22 13:19:34:292 UTC [default task-87]: INFO: List of Ldap Configs chosen for Synchronization
2019-09-22 13:19:34:292 UTC [default task-87]: INFO:    >>>0812d6878000252c - Internal_LDAP<<<
2019-09-22 13:19:34:292 UTC [default task-87]: INFO:
2019-09-22 13:19:34:294 UTC [default task-87]:
2019-09-22 13:19:34:294 UTC [default task-87]: ==================================================================================
2019-09-22 13:19:34:297 UTC [default task-87]: Starting Sychronization for ldap config object >>>Internal_LDAP<<< ...
2019-09-22 13:19:35:512 UTC [default task-87]: INFO: Directory Type: Sun ONE Directory Server ...
2019-09-22 13:19:35:517 UTC [default task-87]: INFO: Ldap Connection: SSL connection
2019-09-22 13:19:35:517 UTC [default task-87]: INFO: ldap://ldap.domain.com:636
2019-09-22 13:19:35:517 UTC [default task-87]: INFO: {java.naming.provider.url=ldap://ldap.domain.com:636, java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory}
2019-09-22 13:19:35:597 UTC [Thread-91752]: INFO: DM_LDAP_IGNORE_HOSTNAME_CHECK environment variable is enabled.
2019-09-22 13:19:35:598 UTC [Thread-91752]: INFO: Skipping hostname check
2019-09-22 13:19:35:598 UTC [Thread-91752]: INFO: DctmTrustMangaer.checkServerTrusted(): Successfully validated the certificate chain sent from server.
2019-09-22 13:19:35:635 UTC [default task-87]: INFO: DM_LDAP_IGNORE_HOSTNAME_CHECK environment variable is enabled.
2019-09-22 13:19:35:635 UTC [default task-87]: INFO: Skipping hostname check
2019-09-22 13:19:35:635 UTC [default task-87]: INFO: DctmTrustMangaer.checkServerTrusted(): Successfully validated the certificate chain sent from server.
2019-09-22 13:19:35:663 UTC [default task-87]: INFO: LDAP Search Retry: is_child_context = true
2019-09-22 13:19:35:663 UTC [default task-87]: INFO: LDAP Search Retry: Retry count = 1
2019-09-22 13:19:35:665 UTC [default task-87]: Starting the group synchronization...
...
2019-09-22 13:19:35:683 UTC [default task-87]: Group synchronization finished.
2019-09-22 13:19:35:683 UTC [default task-87]:
2019-09-22 13:19:35:683 UTC [default task-87]: INFO: Updating Last Run Time: [20190922131935Z]
2019-09-22 13:19:35:683 UTC [default task-87]: INFO: Updating Last Change No: [null]
2019-09-22 13:19:35:749 UTC [default task-87]: INFO: Ldap Config Object >>>>Internal_LDAP<<<< updated
2019-09-22 13:19:35:751 UTC [default task-87]: Disconnected from LDAP Server successfully.
2019-09-22 13:19:36:250 UTC [default task-87]: Synchronization of ldap config object >>>Internal_LDAP<<< is finished
2019-09-22 13:19:36:250 UTC [default task-87]: ==================================================================================
2019-09-22 13:19:36:250 UTC [default task-87]:
2019-09-22 13:19:36:265 UTC [default task-87]: INFO: Job Status: [dm_LDAPSynchronization Tool Completed with WARNINGS at 2019/09/22 13:19:36. Total duration was 6 seconds.]
2019-09-22 13:19:36:278 UTC [default task-87]: INFO: Job Status updated
2019-09-22 13:19:36:278 UTC [default task-87]: LDAP Synchronization Ended @ Sun Sep 22 13:19:36 UTC 2019
2019-09-22 13:19:36:278 UTC [default task-87]: Session s2 released successfully
Report End  2019/09/22 13:19:36
[dmadmin@content-server-0 ldapdb]$

 

A pretty annoying design, but there is nothing you can do about it. Fortunately, it's not hard to fix the issue once you know what the problem is!

 

The post Documentum – LDAP Config Object “certdb_location” has invalid value appeared first on Blog dbi services.

Documentum – Automatic/Silent creation of LDAP/LDAPs Server Config Objects

Yann Neuhaus - Sun, 2019-12-08 02:00

If you have been working with Documentum, then you probably already created/configured an LDAP/LDAPs Server Config Object (or several) so that your users can be globally managed in your organization. There are several compatible LDAP Servers, so I will just take one (Sun One/Netscape/iPlanet Directory Server). To create this LDAP/LDAPs Server Config Object, you probably used Documentum Administrator because it's simple and quick to set up, however that's not enough for automation. In this blog, I will show and explain the steps needed to configure the same but without any need for DA.

The problem with DA is that it usually does some magic and you cannot always do exactly the same without it. Here, this also applies but to a smaller extent since it is only the SSL part (LDAPs) that needs specific steps. For this, there is a KB created by EMC some years ago (migrated to OpenText): KB6321243.

Before starting, let's set up some parameters that will be used in this blog:

[dmadmin@content-server-0 ~]$ repo="repo01"
[dmadmin@content-server-0 ~]$ dm_location_name="ldap_chain"
[dmadmin@content-server-0 ~]$ file_path="$DOCUMENTUM/dba/secure/ldapdb/${dm_location_name}.pem"
[dmadmin@content-server-0 ~]$ ldap_server_name="Internal_LDAP"
[dmadmin@content-server-0 ~]$ ldap_host="ldap.domain.com"
[dmadmin@content-server-0 ~]$ ldap_ssl=1 #0 for LDAP, 1 for LDAPs
[dmadmin@content-server-0 ~]$ ldap_port=636
[dmadmin@content-server-0 ~]$ location=`if ((${ldap_ssl} == 1)); then echo ${dm_location_name}; else echo "ldapcertdb_loc"; fi`
[dmadmin@content-server-0 ~]$ ldap_principal="ou=APP,ou=applications,ou=intranet,dc=dbi services,dc=com"
[dmadmin@content-server-0 ~]$ ldap_pwd="T3stP4ssw0rd"
[dmadmin@content-server-0 ~]$ ldap_user_filter="objectclass=person"
[dmadmin@content-server-0 ~]$ ldap_user_class="person"
[dmadmin@content-server-0 ~]$ ldap_group_filter="objectclass=groupofuniquenames"
[dmadmin@content-server-0 ~]$ ldap_group_class="groupofuniquenames"

 

1. Preparation steps for LDAPs

The steps in this section are only needed in case you need to configure SSL communications between your LDAP Server and Documentum. It can be done upfront without any issues. So let's start with setting up the environment. Without DA, the only way to import/trust the SSL Certificate for the LDAPs connection is to add an environment variable named "DM_LDAP_CERT_FILE" and set it to "1". This will allow Documentum to use certificate files for the trust chain instead of doing what DA is doing (the magic part) that we cannot replicate.

It is a little bit out of scope for this blog, but a second variable is often needed: "DM_LDAP_IGNORE_HOSTNAME_CHECK", which drives the validation of the hostname. Setting this to "1" will disable the hostname check and therefore allow you to use an LDAP Server that is behind a Proxy or a Load Balancer. This can also be needed with a non-secure LDAP.

[dmadmin@content-server-0 ~]$ grep DM_LDAP ~/.bash_profile
[dmadmin@content-server-0 ~]$ echo $DM_LDAP_CERT_FILE -- $DM_LDAP_IGNORE_HOSTNAME_CHECK
--
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ echo "export DM_LDAP_CERT_FILE=1" >> ~/.bash_profile
[dmadmin@content-server-0 ~]$ echo "export DM_LDAP_IGNORE_HOSTNAME_CHECK=1" >> ~/.bash_profile
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ grep DM_LDAP ~/.bash_profile
export DM_LDAP_CERT_FILE=1
export DM_LDAP_IGNORE_HOSTNAME_CHECK=1
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ source ~/.bash_profile
[dmadmin@content-server-0 ~]$ echo $DM_LDAP_CERT_FILE -- $DM_LDAP_IGNORE_HOSTNAME_CHECK
1 -- 1
[dmadmin@content-server-0 ~]$

 

For the variables to take effect, you will need to restart the Repositories. I usually set everything up (LDAPs specific pieces + LDAP steps) and only then restart the repositories so it’s done once at the very end of the setup.

The next step is then to create/prepare the Trust Chain. In DA, you can import the Trust Chain one certificate at a time, the Root first and then the Intermediate one. While using "DM_LDAP_CERT_FILE=1" (so without DA), you can unfortunately use only one file per LDAP and therefore this file will need to contain the full Trust Chain. To do that, simply put the content of both the Root and Intermediate SSL Certificates in a single file, one after the other. So in the end, your file should contain something like this:

[dmadmin@content-server-0 ~]$ vi ${dm_location_name}.pem
[dmadmin@content-server-0 ~]$ cat ${dm_location_name}.pem
-----BEGIN CERTIFICATE-----
<<<content_of_root_ca>>>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<<<content_of_intermediate_ca>>>
-----END CERTIFICATE-----
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ mv ${dm_location_name}.pem ${file_path}
[dmadmin@content-server-0 ~]$

 

Once you have the file, you can put it wherever you want with whatever name you want, but it absolutely needs to have a ".pem" extension. You can check this blog, which explains what happens if this isn't the case and what needs to be done to fix it. As you can see above, I chose to put the file where DA puts them as well.
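
As a side note (plain OpenSSL, nothing Documentum-specific), you can sanity-check the concatenated Trust Chain before wiring it into Documentum, for example by listing the certificates it contains and by testing it against the target LDAP Server directly:

[dmadmin@content-server-0 ~]$ # list subject/issuer of every certificate in the bundle
[dmadmin@content-server-0 ~]$ openssl crl2pkcs7 -nocrl -certfile ${file_path} | openssl pkcs7 -print_certs -noout
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ # check that the chain validates the LDAP Server ("Verify return code: 0 (ok)" expected)
[dmadmin@content-server-0 ~]$ openssl s_client -connect ${ldap_host}:${ldap_port} -CAfile ${file_path} < /dev/null 2>/dev/null | grep "Verify return code"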

The last step for this SSL specific part is then to create a "dm_location" Object that will reference the file that has been created so that the LDAP Server Config Object can use it and trust the target LDAP Server. Contrary to the LDAP Certificate Database Management in DA, which is global to all Repositories (so it needs to be done only once), here you will need to create the "dm_location" Object in all the Repositories that are going to use the LDAP Server. This can be done very easily via iAPI:

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
create,c,dm_location
set,c,l,object_name
${dm_location_name}
set,c,l,path_type
file
set,c,l,file_system_path
${file_path}
save,c,l
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d6878000111b started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> ...
3a12d68780002522
API> SET> ...
OK
API> SET> ...
OK
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content-server-0 ~]$

 

The name of the "dm_location" Object doesn't have to be the same as the name of the Trust Chain file. I'm just using the same name here so it's easier to see the relationship between the two. These are the only steps that are specific to SSL communications between your LDAP Server and Documentum.

 

2. Global steps for LDAP

This section applies to all cases. Whether you are setting up an LDAP or LDAPs Server, you will need to create the "dm_ldap_config" Object and everything else described below. As mentioned previously, I'm using one type of LDAP Server for this example (value of "dm_ldap_config.a_application_type"). If you aren't very familiar with the settings inside the Repository, then the simplest way to find out which parameters you need (and the associated values) is simply to create one LDAP Config Object using DA. Once done, just dump it and you can re-use that same configuration in the future (see the sketch below).
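
If you went the DA route first, here is a minimal sketch of how you could dump such an object with iAPI in order to copy its values (retrieve/dump are standard iAPI commands; the object name used is the one defined at the top of this blog):

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
retrieve,c,dm_ldap_config where object_name='${ldap_server_name}'
dump,c,l
exit
EOF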

So let’s start with creating a sample LDAP Server Config Object:

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
create,c,dm_ldap_config
set,c,l,object_name
${ldap_server_name}
set,c,l,map_attr[0]
user_name
set,c,l,map_attr[1]
user_login_name
set,c,l,map_attr[2]
user_address
set,c,l,map_attr[3]
group_name
set,c,l,map_attr[4]
client_capability
set,c,l,map_attr[5]
user_xprivileges
set,c,l,map_attr[6]
default_folder
set,c,l,map_attr[7]
workflow_disabled
set,c,l,map_val[0]
uniqueDisplayName
set,c,l,map_val[1]
uid
set,c,l,map_val[2]
mail
set,c,l,map_val[3]
cn
set,c,l,map_val[4]
2
set,c,l,map_val[5]
32
set,c,l,map_val[6]
/Home/\${uniqueDisplayName}
set,c,l,map_val[7]
false
set,c,l,map_attr_type[0]
dm_user
set,c,l,map_attr_type[1]
dm_user
set,c,l,map_attr_type[2]
dm_user
set,c,l,map_attr_type[3]
dm_group
set,c,l,map_attr_type[4]
dm_user
set,c,l,map_attr_type[5]
dm_user
set,c,l,map_attr_type[6]
dm_user
set,c,l,map_attr_type[7]
dm_user
set,c,l,map_val_type[0]
A
set,c,l,map_val_type[1]
A
set,c,l,map_val_type[2]
A
set,c,l,map_val_type[3]
A
set,c,l,map_val_type[4]
V
set,c,l,map_val_type[5]
V
set,c,l,map_val_type[6]
E
set,c,l,map_val_type[7]
V
set,c,l,ldap_host
${ldap_host}
set,c,l,port_number
${ldap_port}
set,c,l,person_obj_class
${ldap_user_class}
set,c,l,group_obj_class
${ldap_group_class}
set,c,l,per_search_base
${ldap_principal}
set,c,l,grp_search_base
${ldap_principal}
set,c,l,per_search_filter
${ldap_user_filter}
set,c,l,grp_search_filter
${ldap_group_filter}
set,c,l,bind_dn
${ldap_principal}
set,c,l,user_subtype
dm_user
set,c,l,deactivate_user_option
T
set,c,l,import_mode
groups
set,c,l,bind_type
bind_by_dn
set,c,l,ssl_mode
${ldap_ssl}
set,c,l,ssl_port
${ldap_port}
set,c,l,certdb_location
${location}
set,c,l,map_rejection[0]
2
set,c,l,map_rejection[1]
2
set,c,l,map_rejection[2]
0
set,c,l,map_rejection[3]
2
set,c,l,map_rejection[4]
0
set,c,l,map_rejection[5]
0
set,c,l,map_rejection[6]
2
set,c,l,map_rejection[7]
0
set,c,l,retry_count
3
set,c,l,retry_interval
3
set,c,l,failover_use_interval
300
set,c,l,r_is_public
F
set,c,l,a_application_type
netscape
set,c,l,a_full_text
T
save,c,l
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68780001123 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> ...
0812d6878000252c
API> SET> ...
OK
...
...
...
[dmadmin@content-server-0 ~]$

 

Once the LDAP Server Config Object has been created, you can register it in the "dm_server_config" Objects. In our silent scripts, we use the r_object_id of the object just created so that we are sure it is the correct value, but below, for simplification, I'm using a select to retrieve the r_object_id based on the LDAP Object Name (so make sure it's unique if you use the below):

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
?,c,update dm_server_config object set ldap_config_id=(select r_object_id from dm_ldap_config where object_name='${ldap_server_name}')
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d6878000112f started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> objects_updated
---------------
              1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "1 objects were affected by your UPDATE statement."

API> Bye
[dmadmin@content-server-0 ~]$

 

Then, it is time to encrypt the password of the LDAP Account that is used for the "bind_dn" (${ldap_principal} above):

[dmadmin@content-server-0 ~]$ crypto_docbase=`grep ^dfc.crypto $DOCUMENTUM_SHARED/config/dfc.properties | tail -1 | sed 's,.*=[[:space:]]*,,'`
[dmadmin@content-server-0 ~]$ 
[dmadmin@content-server-0 ~]$ iapi ${crypto_docbase} -U${USER} -Pxxx << EOF
encrypttext,c,${ldap_pwd}
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase gr_repo
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68880001135 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> ...
DM_ENCR_TEXT_V2=AAAAEHQfx8vF52wIC1Lg8KoxAflW/I7ZnbHwEDJCciKx/thqFZxAvIFNtpsBl6JSGmI4XKYCCuUl/NMY7BTsCa2GeIdUebL2LYfA/nJivzuikqOt::gr_repo
API> Bye
[dmadmin@content-server-0 ~]$

 

Finally, the only thing left is to create the file "$DOCUMENTUM/dba/config/${repo}/ldap_${dm_ldap_config_id}.cnt" and put in it the content of the encrypted password (the whole line "DM_ENCR_TEXT_V2=…::gr_repo"), as sketched below. As mentioned previously, after a small restart of the Repository, you should then be able to run your dm_LDAPSynchronization job. You might want to configure the job with some specific properties, but that's up to you.
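
A minimal sketch of that last step (the variable assignment and the <encrypted_value> placeholder are just for illustration -- the r_object_id is the one returned by the dm_ldap_config creation above, and the line to paste is the full output of the encrypttext call):

[dmadmin@content-server-0 ~]$ # r_object_id of the dm_ldap_config object created earlier (0812d6878000252c in this example)
[dmadmin@content-server-0 ~]$ dm_ldap_config_id=0812d6878000252c
[dmadmin@content-server-0 ~]$ # write the full DM_ENCR_TEXT_V2 line returned by encrypttext into the .cnt file
[dmadmin@content-server-0 ~]$ echo 'DM_ENCR_TEXT_V2=<encrypted_value>::gr_repo' > $DOCUMENTUM/dba/config/${repo}/ldap_${dm_ldap_config_id}.cnt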

With all the commands above, you already have a very good basis to automate the creation/setup of your LDAP/LDAPs Server without issue. In our automation, instead of printing the result of the iAPI commands to the console, we usually put it in a log file. With that, we can automatically retrieve the result of the previous commands and continue the execution based on the outcome, so there is no need for any human interaction. In the scope of this blog, it was much more human-friendly to display it directly.

Maybe one final note: the above steps are for a Primary Content Server. In case you are trying to do the same thing on a Remote Content Server (RCS/CFS), some steps aren't needed. For example, you will still need to put the Trust Chain in the correct location, but you won't need to create the "dm_location" or "dm_ldap_config" Objects since they are inside the Repository and therefore already present.

 

The post Documentum – Automatic/Silent creation of LDAP/LDAPs Server Config Objects appeared first on Blog dbi services.

Installing Oracle 19c on Linux

Pete Finnigan - Sat, 2019-12-07 20:53
I needed to create a new 19c install yesterday for a test of some customer software and whilst I love Oracle products I have to say that installing the software and database has never been issue free and simple over....[Read More]

Posted by Pete On 06/12/19 At 04:27 PM

Categories: Security Blogs

Gather Stats while doing a CTAS

Tom Kyte - Fri, 2019-12-06 17:53
Can you please provide your opinion on the below point. This is what I have noticed. When we create a table using a CTAS, and then check the user_Tables, the last_analyzed and num_rows column is already populated with accurate data. If it is so, ...
Categories: DBA Blogs

How can application control to explicitly call OCIStmtPrepare2 rather than OCIStmtPrepare when using pro*C

Tom Kyte - Fri, 2019-12-06 17:53
Our application got an ORA-25412: transaction replay disabled by call to OCIStmtPrepare. Oracle Version: 12.2. The Oracle runs in RAC mode. After searched on the internet, we found below explanation: This call(OCIStmtPrepare) does no...
Categories: DBA Blogs

'BEFORE CREATE ON SCHEMA' trigger apparently not firing before Create Table

Tom Kyte - Fri, 2019-12-06 17:53
In Oracle 8.1.7 instance set up with characterset US7ASCII Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production With the Partitioning option JServer Release 8.1.7.4.0 - Production SQL> create table t1 (c1 varchar2...
Categories: DBA Blogs

Left Joining Four Tables without duplicates from right tables or Cartesian product!

Tom Kyte - Fri, 2019-12-06 17:53
I am running the query below to get data from 4 tables, but the problem that data is fetched as Cartesian product, in other words, items from tblEdu is being duplicated with items from tblTrain SELECT tblpersonal.*, tbltrain.*, tbledu.*,...
Categories: DBA Blogs

Temp space

Jonathan Lewis - Fri, 2019-12-06 06:18

A question about hunting down the source of the error “ORA-01652 unable to extend temp segment by NNN in tablespace XXX” shows up on the Oracle-L mailing list or the Oracle developer community forum from time to time. In most cases the tablespace referenced is the temporary tablespace, which means the session reporting the error was probably trying to allocate some space for sorting, or doing a hash join, or instantiating a GTT (global temporary table) or a CTE (common table expression / “with” subquery). The difficulty in cases like this is that the session reporting the error might be the victim of some other session’s greed – so looking at what the session was doing won’t necessarily point you to the real problem.

Of course you then run into a further problem tracking down the source of the problem. By the time you hear the complaint (even if it’s only seconds after the error appeared) the session that had been hogging the temporary tablespace may have finished what it was doing, leaving a huge amount of free space in the temporary tablespace and suggesting (to the less experienced and cynical user) that there’s something fundamentally wrong with the way Oracle has been accounting for space usage.

If you find yourself in this situation remember that (if you’re licensed to take advantage of it) the active session history may be able to help.  One of the columns in v$active_session_history is called temp_space_allocated with the slightly misleading description: “Amount of TEMP memory (in bytes) consumed by this session at the time this sample was taken”. A simple query against v$active_session_history may be enough to identify the session and SQL  statement that had been holding the temporary space when the error was raised, for example:


column pga_allocated        format 999,999,999,999
column temp_space_allocated format 999,999,999,999

break on session_id skip 1 on session_serial#

select
        session_id, session_serial#, 
        sample_id, 
        sql_id, 
        pga_allocated,
        temp_space_allocated
from
        v$active_session_history
where
        sample_time between sysdate - 5/1440 and sysdate
and     nvl(temp_space_allocated,0) != 0
order by
        session_id, sample_id
/

All I’ve done for this example is query v$active_session_history for the last 5 minutes reporting a minimum of information from any rows that show temp space usage. As a minor variation on the theme you can obviously change the time range, and you might want to limit the output to rows reporting more than 1MB (say) of temp space usage.

You’ll notice that I’ve also reported the pga_allocated (Description: Amount of PGA memory (in bytes) consumed by this session at the time this sample was taken) in this query; this is just a little convenience – a query that’s taking a lot of temp space will probably start by acquiring a lot of memory so it’s nice to be able to see the two figures together.
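
As a sketch of the variation mentioned above (the same query, just with a wider time window and a 1MB threshold -- adjust both to taste):

select
        session_id, session_serial#,
        sample_id,
        sql_id,
        pga_allocated,
        temp_space_allocated
from
        v$active_session_history
where
        sample_time between sysdate - 30/1440 and sysdate
and     nvl(temp_space_allocated,0) > 1048576
order by
        session_id, sample_id
/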

There are plenty of limitations and flaws in the usefulness of this report and I’ll say something about that after showing an example of usage. Let’s start with a script to build some data before running a space-consuming query:


rem
rem     Script:         allocate_tempspace.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1 as 
select * from all_objects
;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

execute dbms_stats.gather_table_stats(null,'t1')

execute dbms_lock.sleep(20)

set pagesize  60
set linesize 255
set trimspool on
set serveroutput off
alter session set statistics_level = all;

with ttemp as (
        select /*+ materialize */ * from t1
)
select 
        /*+ no_partial_join(@sel$2 t1b) no_place_group_by(@sel$2) */ 
        t1a.object_type,
        max(t1a.object_name)
from
        ttemp t1a, ttemp t1b
where
        t1a.object_id = t1b.object_id
group by
        t1a.object_type
order by
        t1a.object_type
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

My working table t1 consists of 16 copies of the view all_objects – so close to 1 million rows in my case – and the query is hinted to avoid any of the clever transformations that the optimizer could use to reduce the workload so it’s going to do a massive hash join and aggregation to report a summary of a couple of dozen rows. Here’s the execution plan (in this case from 12.2.0.1, though the plan is the same for 19.3 with some variations in the numbers).


SQL_ID  1cwabt12zq6zb, child number 0
-------------------------------------

Plan hash value: 1682228242

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |      1 |        |     29 |00:00:10.03 |   47413 |  21345 |  12127 |       |       |          |         |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |      1 |        |     29 |00:00:10.03 |   47413 |  21345 |  12127 |       |       |          |         |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D665B_E2772D3 |      1 |        |      0 |00:00:01.51 |   28915 |      0 |   9217 |  2068K|  2068K|          |         |
|   3 |    TABLE ACCESS FULL                     | T1                         |      1 |    989K|    989K|00:00:00.24 |   19551 |      0 |      0 |       |       |          |         |
|   4 |   SORT GROUP BY                          |                            |      1 |     29 |     29 |00:00:08.51 |   18493 |  21345 |   2910 |  6144 |  6144 | 6144  (0)|         |
|*  5 |    HASH JOIN                             |                            |      1 |     15M|     15M|00:00:03.93 |   18493 |  21345 |   2910 |    48M|  6400K|   65M (1)|   25600 |
|   6 |     VIEW                                 |                            |      1 |    989K|    989K|00:00:00.36 |    9233 |   9218 |      0 |       |       |          |         |
|   7 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D665B_E2772D3 |      1 |    989K|    989K|00:00:00.35 |    9233 |   9218 |      0 |       |       |          |         |
|   8 |     VIEW                                 |                            |      1 |    989K|    989K|00:00:00.40 |    9257 |   9217 |      0 |       |       |          |         |
|   9 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D665B_E2772D3 |      1 |    989K|    989K|00:00:00.39 |    9257 |   9217 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1A"."OBJECT_ID"="T1B"."OBJECT_ID")

Critically this plan shows us two uses of the temp space but only reports one of them as Used-Tmp. The “hash join” at operation 5 tells us that it reached 65MB of (tunable PGA) memory before going “1-pass”, eventually dumping 25,600 KB to disc. This space usage is corroborated by the 2,910 writes (which, at an 8KB block size, would be 23,280 KB). The missing Used-Tmp, however, is the space taken up by the materialized CTE. We can see that operation 2 is a “load as select” that writes 9,217 blocks to disc (subsequently read back twice – the tablescans shown in operations 7 and 9). That’s  74,000 KB of temp space that doesn’t get reported Used-Tmp.

If we take a look at the plan from 19.3 we see different numbers, but the same “error of omission”:

SQL_ID  1cwabt12zq6zb, child number 0
-------------------------------------

Plan hash value: 1682228242

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |      1 |        |     25 |00:00:08.15 |   34905 |  13843 |   8248 |       |       |          |         |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |      1 |        |     25 |00:00:08.15 |   34905 |  13843 |   8248 |       |       |          |         |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6624_E259E68 |      1 |        |      0 |00:00:01.26 |   23706 |      0 |   5593 |  2070K|  2070K|          |         |
|   3 |    TABLE ACCESS FULL                     | T1                         |      1 |    907K|    907K|00:00:00.21 |   18024 |      0 |      0 |       |       |          |         |
|   4 |   SORT GROUP BY                          |                            |      1 |     25 |     25 |00:00:06.89 |   11193 |  13843 |   2655 |  6144 |  6144 | 6144  (0)|         |
|*  5 |    HASH JOIN                             |                            |      1 |     14M|     14M|00:00:03.55 |   11193 |  13843 |   2655 |    44M|  6400K|   64M (1)|      23M|
|   6 |     VIEW                                 |                            |      1 |    907K|    907K|00:00:00.26 |    5598 |   5594 |      0 |       |       |          |         |
|   7 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6624_E259E68 |      1 |    907K|    907K|00:00:00.25 |    5598 |   5594 |      0 |       |       |          |         |
|   8 |     VIEW                                 |                            |      1 |    907K|    907K|00:00:00.34 |    5595 |   5594 |      0 |       |       |          |         |
|   9 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6624_E259E68 |      1 |    907K|    907K|00:00:00.33 |    5595 |   5594 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1A"."OBJECT_ID"="T1B"."OBJECT_ID")


With slightly fewer rows in t1 (907K vs. 989K) we write 5,593 blocks for the materialized CTE (instead of 9,217) and spill 2,655 blocks during the hash join (instead of 2,910). But again it’s only the hash join spill that is reported under Used-Tmp. Note, by the way, that Used-Tmp was reported in KB in 12.2 but is reported in MB in 19.3.0.0.

Side note: comparing the number of rows created and blocks written for the CTE, it looks as if 19.3 is using the data blocks much more efficiently than 12.2. There’s no obvious reason for this (though a first guess would be that the older mechanism writes a GTT with pctfree=10 while the new one avoids any free space and transactional details) so, as ever, I now have another draft blog note reminding me to investigate (eventually) what differences there are in CTE storage on the upgrade. It’s something that might make a difference in a few special cases.
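
A possible starting point for that investigation – just a sketch of one approach – would be to watch temp segment usage by segment type while the query is running; the materialized CTE and the workarea spills should appear as separate segment types:

select
        se.sid, tu.sql_id, tu.segtype, tu.contents, tu.blocks
from
        v$tempseg_usage   tu,
        v$session         se
where
        se.saddr = tu.session_addr
order by
        tu.blocks desc
;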

With the figures from the execution plans in mind we can now look at the results of the query against v$active_session_history. Conveniently the queries took a few seconds to complete, so we’re going to see several rows for each execution.
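
The query run against ASH isn’t repeated here, but a sketch of its shape – the column list matches the headings in the output below, while the predicates are assumptions – would be:

select
        session_id, session_serial#, sample_id, sql_id,
        pga_allocated, temp_space_allocated
from
        v$active_session_history
where
        sample_time > sysdate - 10/(24*60)
and     temp_space_allocated is not null
order by
        sample_id
;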

First the results from 12.2.0.1

SESSION_ID SESSION_SERIAL#  SAMPLE_ID SQL_ID           PGA_ALLOCATED TEMP_SPACE_ALLOCATED
---------- --------------- ---------- ------------- ---------------- --------------------
        14           22234   15306218 1cwabt12zq6zb       95,962,112            1,048,576
                             15306219 1cwabt12zq6zb       97,731,584           37,748,736
                             15306220 1cwabt12zq6zb      148,194,304           77,594,624
                             15306221 1cwabt12zq6zb      168,117,248           85,983,232
                             15306222 1cwabt12zq6zb      168,117,248           90,177,536
                             15306223 1cwabt12zq6zb      168,117,248           95,420,416
                             15306224 1cwabt12zq6zb      168,117,248           98,566,144
                             15306225 1cwabt12zq6zb      168,117,248          102,760,448
                             15306226 1cwabt12zq6zb      116,933,632          103,809,024
                             15306227 1cwabt12zq6zb      116,933,632          103,809,024
                             15306228 b66ycurnwpgud        8,602,624            1,048,576

I pointed out that we had 25,600 KB reported as Used-Tmp and roughly 74,000 KB unreported – a total of nearly 100,000 KB that is reasonably close to the 103,800,000 bytes reported by ASH. Moreover the timing of the plan (loading the CTE in the first 2 seconds) seems to agree with the growth of temp_space_allocated to 77,594,624 bytes by the time we get to sample_id 15306220 in ASH. Then we have several seconds of slow growth as the hash join takes place and feeds its results up to the sort group by. At the end of the query we happen to have been lucky enough to catch one last sample just before the session had released all its temp space and ceased to be active. (Note, however, that the sql_id at that sample point was not the sql_id of our big query – and that’s a clue about one of the limitations of using ASH to find the greedy SQL.)

We see the same pattern of behaviour in 19.3.0.0:


SESSION_ID SESSION_SERIAL#  SAMPLE_ID SQL_ID           PGA_ALLOCATED TEMP_SPACE_ALLOCATED
---------- --------------- ---------- ------------- ---------------- --------------------
       136           42767    2217500 1cwabt12zq6zb      143,982,592           46,137,344
                              2217501 1cwabt12zq6zb      193,527,808           54,525,952
                              2217502 1cwabt12zq6zb      193,527,808           57,671,680
                              2217503 1cwabt12zq6zb      193,527,808           61,865,984
                              2217504 1cwabt12zq6zb      197,722,112           67,108,864
                              2217505 1cwabt12zq6zb      150,601,728           70,254,592
                              2217506 1cwabt12zq6zb      150,601,728           70,254,592

We start with an almost instantaneous jump to 46MB of temp_space_allocated in the first second of the query – that’s the 5,593 blocks of the CTE being materialized – followed by the slow growth of temp space as the hash join runs, spills to disc, and passes its data up to the sort group by. Again we can see that the peak usage was the CTE (46MB) plus the reported spill of 23MB (plus rounding errors and odd bits).

Preliminary Observations

Queries against ASH (v$active_session_history) can show us sessions that were holding space in the temporary tablespace at the moment a sample of active sessions was taken. This may allow us to identify greedy sessions that were causing other sessions to fail with ORA-01652 (unable to extend temp segment).

We have seen that there is at least one case where we get better information about temp space allocation from ASH than we do from the variants on v$sql_plan that include the SQL Workarea information (v$sql_workarea, v$sql_workarea_active) because the space acquired during materialization of CTEs is not reported as a “tunable SQL workarea” but does appear in the ASH temp_space_allocated.
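
For comparison, this is the sort of query you might run against v$sql_workarea_active while the statement is executing (a sketch only): it would report the hash join and the sort as they spill, but the “load as select” for the CTE would not appear because it isn’t a tunable workarea:

select
        sid, sql_id, operation_type, operation_id,
        actual_mem_used, number_passes, tempseg_size
from
        v$sql_workarea_active
order by
        tempseg_size nulls first
;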

At first sight it looks as if we may be able to use the query against ASH to identify the statement (by sql_id) that was the one being run by the greedy session when it consumed all the space. As we shall see in a further article, there are various reasons why this may be over-optimistic; however, in many cases there’s a fair chance that when you see the same sql_id appearing in a number of consecutive rows of the report then that statement may be the thing that is responsible for the recent growth in temp space usage – and you can query v$sql to find the text and call dbms_xplan.display_cursor() to get as much execution plan information as possible.
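
By way of illustration, using the sql_id from the example above (the format option is just one reasonable choice):

select  sql_text
from    v$sql
where   sql_id = '1cwabt12zq6zb'
and     child_number = 0
;

select  *
from    table(dbms_xplan.display_cursor('1cwabt12zq6zb', 0, 'allstats last'));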

Further questions
  • When does a session release the temp_space_allocated? Will the space be held (locked) as long as the cursor is open, or can it be released when it is no longer needed? Will it be held, but releasable, even after the cursor has (from the client program’s perspective) been closed?
  • Could we be fooled by a report that said a session was holding a lot of space when it didn’t need it and would have released it if the demand had appeared?
  • Under what conditions might the temp_space_allocated in an ASH sample have nothing to do with the sql_id reported in the same sample?
  • Are there any other reasons why ASH might report temp_space_allocated when an execution plan doesn’t?
  • Is temp_space_allocated only about the temporary tablespace, or could it include information about other (“permanent”) tablespaces?

Stay tuned for further analysis of the limitations of using v$active_session_history.temp_space_allocated to help identify the source of a space management ORA-01652 issue.

Machine Learning and Spatial for FREE in the Oracle Database

Rittman Mead Consulting - Fri, 2019-12-06 04:34

Last week at UKOUG Techfest19 I spoke a lot about Machine Learning both with Oracle Analytics Cloud and more in depth in the Database with Oracle Machine Learning together with Charlie Berger, Oracle Senior Director of Product Management.


As mentioned several times in my previous blog posts, Oracle Analytics Cloud provides a set of tools helping Data Analysts start their path to Data Science. If, on the other hand, we're dealing with experienced Data Scientists and huge datasets, Oracle's proposal is to move Machine Learning where the data resides with Oracle Machine Learning. OML is an ecosystem of various options to perform ML with dedicated integration with Oracle Databases or Big Data appliances.


One of the best known branches is OML4SQL, which provides the ability to do proper data science directly in the database with PL/SQL calls! During the UKOUG TechFest19 talk Charlie Berger demoed it using a collaborative Notebook on top of an Autonomous Data Warehouse Cloud.
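
As a flavour of what that looks like, here is a minimal, hypothetical sketch – the model, table and column names are invented purely for illustration:

-- Build a classification model in the database (training table and columns are hypothetical)
begin
  dbms_data_mining.create_model(
    model_name          => 'demo_churn_model',
    mining_function     => dbms_data_mining.classification,
    data_table_name     => 'customers_train',
    case_id_column_name => 'customer_id',
    target_column_name  => 'has_churned'
  );
end;
/

-- Score new rows directly in SQL with the prediction() operator
select customer_id,
       prediction(demo_churn_model using *) as predicted_churn
from   customers_new;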


Both Oracle ADW and ATP include OML by default at no extra cost. This wasn’t true for the other database offerings, in cloud or on-premises, which required an additional option to be purchased (the Advanced Analytics option for on-premises deals). The separate license requirement was obviously something that limited the spread of this functionality but, I’m happy to say, it’s going away!

Oracle's blog post yesterday announced that:

As of December 5, 2019, the Machine Learning (formerly known as Advanced Analytics), Spatial and Graph features of Oracle Database may be used for development and deployment purposes with all on-prem editions and Oracle Cloud Database Services. See the Oracle Database Licensing Information Manual (pdf) for more details.

What this means is that both features are included for FREE within the Oracle Database license! Great news for both Machine Learning and Graph Database fans! The following tweet from Dominic Giles (Master Product Manager for the Oracle DB) provides a nice summary of the licensing, including the two options, for Oracle DB 19c.

The #Oracle Database now has some previously charged options added to the core functionality of both Enterprise Edition and Standard Edition 2. Details in the 19c licensing guide with more information to follow. pic.twitter.com/dqkRRQvWq2

— dominic_giles (@dominic_giles) December 5, 2019

But hey, this license change also affects older versions, starting from 12.2, the oldest release still in general support! So, no more excuses: perform Machine Learning where your data is – in the database with Oracle Machine Learning!

Categories: BI & Warehousing

Oracle Ranks First in all Four Use Cases for Oracle Database in Gartner’s Critical Capabilities for Operational Database Management Systems Report

Oracle Press Releases - Thu, 2019-12-05 07:00
Press Release
Oracle Ranks First in all Four Use Cases for Oracle Database in Gartner’s Critical Capabilities for Operational Database Management Systems Report
Oracle also named a Leader in 2019 Gartner Magic Quadrant for Operational Database Management Systems, recognized in every report published since 2013

Redwood Shores, Calif.—Dec 5, 2019

Oracle today announced that it has been recognized in two newly released Gartner database reports. Oracle was ranked first in all four use cases of the 2019 Gartner “Critical Capabilities for Operational Database Management Systems” report1 and was named a Leader in Gartner’s 2019 “Magic Quadrant for Operational Database Management Systems” report2.

The self-driving Oracle Autonomous Database eliminates complexity, human error, and manual management to enable highest reliability, performance, and security at low cost.

“We believe Oracle’s placement in Gartner’s reports demonstrates our continued leadership in the database market and our commitment to innovation across our data management portfolio,” said Andrew Mendelsohn, Executive Vice President Database Server Technologies, Oracle. “Oracle continues to deliver unprecedented performance, reliability, security, and new cutting-edge technology via our cloud and on-premises offerings.”

Oracle believes it was positioned as a Leader in the Gartner Magic Quadrant for Operational Database Management Systems for its continued innovation across its database management portfolio. The Oracle Autonomous Database is available in the cloud and will be available for on-premises deployment soon through its Oracle Generation 2 Cloud at Customer offering. Oracle Database 19c includes all the latest database innovations, and is the long term support release for Oracle Database 12c Release 2. Oracle also recently shipped the Oracle Exadata Database Machine X8M, which employs Intel® Optane DC persistent memory and innovative database RDMA technologies to deliver up to 20x better latency than All Flash storage arrays.

For the Gartner Operational Database Management Systems Critical Capabilities report, Oracle Database once again ranked No. 1 in all four core operational database use cases: traditional transactions, distributed variable data, event processing/data in motion, and augmented transactions.

Oracle further demonstrates its commitment in continuing to deliver a converged database that makes it easy for developers to build multi-model, data-driven applications. The Oracle Database now includes several different sharding capabilities, enhancing automated data distribution especially important for hybrid cloud or hyperscale requirements.

Oracle Autonomous Database builds on 40 years of experience supporting the world’s most demanding applications. The first-of-its-kind, Oracle Autonomous Database uses groundbreaking machine learning to enable self-driving, self-repairing, and self-securing capabilities with cloud economies of scale and elasticity. The complete automation of database and infrastructure operations like patching, tuning and upgrading, cuts administrative costs, and allows developers, business analysts, and data scientists to focus on getting more value from data and building new innovations.

Download a complimentary copy of Gartner’s 2019 Critical Capabilities for Operational Database Management Systems here.

Download a complimentary copy of Gartner’s 2019 Magic Quadrant for Operational Database Management Systems here.

[1] Source: Gartner, Critical Capabilities for Operational Database Management Systems, Donald Feinberg, Merv Adrian, Nick Heudecker, 25 November 2019.
[2] Source: Gartner, Magic Quadrant for Operational Database Management Systems, Merv Adrian, Donald Feinberg, Nick Heudecker, 25 November 2019.

Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Victoria Brown
Oracle
+1.650.850.2009
victoria.brown@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle's products may change and remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Victoria Brown

  • +1.650.850.2009
