Feed aggregator

Getting started with Pivotal Telemetry Collector

Pas Apicella - Thu, 2019-10-17 18:44
Pivotal Telemetry Collector is an automated tool that collects data from a series of Pivotal Cloud Foundry (PCF) APIs found within a foundation and securely sends that data to Pivotal. The tool collects:

  • Configuration data from the Ops Manager API.
  • Optional certificate data from the CredHub API.
  • Optional app, task and service instance usage data from the Usage Service API.

Pivotal uses this information to do the following:

  • Improve its products and services.
  • Fix problems.
  • Advise customers on how best to deploy and use Pivotal products.
  • Provide better customer support.
Steps to Run

1. Download the binaries required to run "Pivotal Telemetry Collector" from Pivotal Network using this URL:

https://network.pivotal.io/products/pivotal-telemetry-collector/

2. Extract the download to your file system. You will notice three executables; use the right one for your OS. In my case it was the macOS executable "telemetry-collector-darwin-amd64".

-rwxr-xr-x   1 papicella  staff  14877449  5 Oct 00:42 telemetry-collector-linux-amd64*
-rwxr-xr-x   1 papicella  staff  14771312  5 Oct 00:42 telemetry-collector-darwin-amd64*
-rwxr-xr-x   1 papicella  staff  14447104  5 Oct 00:42 telemetry-collector-windows-amd64.exe*
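
Before going further you can confirm the binary for your OS actually runs; the --help option (used again in step 5 below) also lists the available commands and flags:

$ ./telemetry-collector-darwin-amd64 --help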

3. Make sure you have network access to your PCF environment. You will need to reach the Operations Manager URL as well as the CF API and Usage Service API endpoints, as shown below.

Ops Manager endpoint

$ ping opsmgr-02.haas-yyy.pez.pivotal.io
PING opsmgr-02.haas-yyy.pez.pivotal.io (10.195.1.1): 56 data bytes
64 bytes from 10.195.1.1: icmp_seq=0 ttl=58 time=338.412 ms

CF API endpoint

$ ping api.system.run.haas-yyy.pez.pivotal.io
PING api.system.run.haas-yyy.pez.pivotal.io (10.195.1.2): 56 data bytes
64 bytes from 10.195.1.2: icmp_seq=0 ttl=58 time=380.852 ms

Usage Service API endpoint

$ ping app-usage.system.run.haas-yyy.pez.pivotal.io
PING app-usage.system.run.haas-yyy.pez.pivotal.io (10.195.1.3): 56 data bytes
64 bytes from 10.195.1.3: icmp_seq=0 ttl=58 time=495.996 ms
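
Ping only proves basic network reachability. Since the collector talks HTTPS to these endpoints, a quick curl check is a closer test; the sketch below assumes the unauthenticated Ops Manager info endpoint is available and uses -k because this environment has self-signed certificates:

$ curl -k https://opsmgr-02.haas-yyy.pez.pivotal.io/api/v0/info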

4. Now you can use this via one of two options. As you would have guessed, we are using the CLI, given we have downloaded the binaries.

Concourse: https://docs.pivotal.io/telemetry/1-1/using-concourse.html
CLI: https://docs.pivotal.io/telemetry/1-1/using-cli.html

5. To run our first collect we run the collector as follows. More information about the CLI options can be found at the link below or by using the help option "./telemetry-collector-darwin-amd64 --help".

https://docs.pivotal.io/telemetry/1-1/using-cli.html

Script Name: run-with-usage.sh

$ ./telemetry-collector-darwin-amd64 collect --url https://opsmgr-02.haas-yyy.pez.pivotal.io/ --username admin --password {PASSWD} --env-type production --output-dir output --usage-service-url https://app-usage.system.run.haas-yyy.pez.pivotal.io/ --usage-service-client-id push_usage_service --usage-service-client-secret {PUSH-USAGE-SERVICE-PASSWORD} --usage-service-insecure-skip-tls-verify --insecure-skip-tls-verify --cf-api-url https://api.system.run.haas-yyy.pez.pivotal.io

Note: You obtain the PUSH-USAGE-SERVICE-PASSWORD from the Ops Manager PAS tile Credentials tab, as shown in the screenshot below.


6. All set, let's try it out.

$ ./run-with-usage.sh
Collecting data from Operations Manager at https://opsmgr-02.haas-yyy.pez.pivotal.io/
Collecting data from Usage Service at https://app-usage.system.run.haas-yyy.pez.pivotal.io/
Wrote output to output/FoundationDetails_1571355194.tar
Success!

7. Let's extract the output TAR as follows

$ cd output/
$ tar -xvf FoundationDetails_1571355194.tar
x opsmanager/ops_manager_deployed_products
x opsmanager/pivotal-container-service_resources
x opsmanager/pivotal-container-service_properties
x opsmanager/pivotal-mysql_resources
x opsmanager/pivotal-mysql_properties
x opsmanager/cf_resources
x opsmanager/cf_properties
x opsmanager/p-compliance-scanner_resources
x opsmanager/p-compliance-scanner_properties
x opsmanager/ops_manager_vm_types
x opsmanager/ops_manager_diagnostic_report
x opsmanager/ops_manager_installations
x opsmanager/ops_manager_certificates
x opsmanager/ops_manager_certificate_authorities
x opsmanager/metadata
x usage_service/app_usage
x usage_service/service_usage
x usage_service/task_usage
x usage_service/metadata

8. Now let's view the output, which is a set of JSON files. To do that I simply use the "cat" command and pipe the result to jq as shown below. (A more targeted jq filter follows the full output.)

$ cat ./output/opsmanager/ops_manager_installations | jq -r .
{
  "installations": [
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "p-compliance-scanner-a53448be03a372a13d89",
          "identifier": "p-compliance-scanner",
          "label": "Compliance Scanner for PCF",
          "product_version": "1.0.0"
        }
      ],
      "deletions": [],
      "finished_at": "2019-08-30T09:38:29.679Z",
      "id": 25,
      "started_at": "2019-08-30T09:21:44.810Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [
        {
          "change_type": "deletion",
          "deployment_status": "pending",
          "guid": "p-compliance-scanner-1905a6707e4f434e315a",
          "identifier": "p-compliance-scanner",
          "label": "Compliance Scanner for PCF",
          "product_version": "1.0.0-beta.25"
        }
      ],
      "finished_at": "2019-08-08T02:10:51.130Z",
      "id": 24,
      "started_at": "2019-08-08T02:09:10.290Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-07-18T12:27:54.301Z",
      "id": 23,
      "started_at": "2019-07-18T11:31:19.781Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ]
    },
    {
      "additions": [],
      "deletions": [
        {
          "change_type": "deletion",
          "deployment_status": "pending",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "finished_at": "2019-07-07T00:16:31.948Z",
      "id": 22,
      "started_at": "2019-07-07T00:04:32.974Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-07-07T00:02:12.003Z",
      "id": 21,
      "started_at": "2019-07-06T23:57:06.401Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ]
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "p-compliance-scanner-1905a6707e4f434e315a",
          "identifier": "p-compliance-scanner",
          "label": "Compliance Scanner for PCF",
          "product_version": "1.0.0-beta.25"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-10T09:23:19.595Z",
      "id": 20,
      "started_at": "2019-06-10T09:10:44.431Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "skipped",
          "guid": "aquasec-1b94477ae275ee81be58",
          "identifier": "aquasec",
          "label": "Aqua Security for PCF",
          "product_version": "1.0.0"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-06T17:38:18.396Z",
      "id": 19,
      "started_at": "2019-06-06T17:35:34.614Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "skipped",
          "guid": "aquasec-1b94477ae275ee81be58",
          "identifier": "aquasec",
          "label": "Aqua Security for PCF",
          "product_version": "1.0.0"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-06T17:33:18.545Z",
      "id": 18,
      "started_at": "2019-06-06T17:21:41.529Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T11:15:43.546Z",
      "id": 17,
      "started_at": "2019-06-04T10:49:57.969Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T10:44:04.018Z",
      "id": 16,
      "started_at": "2019-06-04T10:17:28.230Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T09:52:30.782Z",
      "id": 15,
      "started_at": "2019-06-04T09:48:45.867Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T09:21:17.245Z",
      "id": 14,
      "started_at": "2019-06-04T09:17:45.360Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T08:50:33.333Z",
      "id": 13,
      "started_at": "2019-06-04T08:47:09.790Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T08:32:44.772Z",
      "id": 12,
      "started_at": "2019-06-04T08:23:27.386Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T08:16:41.757Z",
      "id": 11,
      "started_at": "2019-06-04T08:13:54.645Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T01:53:50.594Z",
      "id": 10,
      "started_at": "2019-06-04T01:43:56.205Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ]
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T01:28:22.975Z",
      "id": 9,
      "started_at": "2019-06-04T01:24:52.587Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-03T08:37:25.961Z",
      "id": 8,
      "started_at": "2019-06-03T08:13:07.511Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ]
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-03T04:57:06.897Z",
      "id": 7,
      "started_at": "2019-06-03T03:52:13.705Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "pivotal-mysql-0e5d717f1c87c8095c9d",
          "identifier": "pivotal-mysql",
          "label": "MySQL for Pivotal Cloud Foundry v2",
          "product_version": "2.5.4-build.51"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-22T05:15:55.703Z",
      "id": 6,
      "started_at": "2019-05-22T04:09:49.841Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-05-22T02:12:22.934Z",
      "id": 5,
      "started_at": "2019-05-22T01:45:28.101Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "failed",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-22T00:23:29.844Z",
      "id": 4,
      "started_at": "2019-05-21T23:16:42.418Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-05-16T01:50:50.640Z",
      "id": 3,
      "started_at": "2019-05-16T01:45:22.438Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "pivotal-container-service-5c28f63410227c2221c8",
          "identifier": "pivotal-container-service",
          "label": "Enterprise PKS",
          "product_version": "1.4.0-build.31"
        }
      ]
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "pivotal-container-service-5c28f63410227c2221c8",
          "identifier": "pivotal-container-service",
          "label": "Enterprise PKS",
          "product_version": "1.4.0-build.31"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-15T00:08:32.241Z",
      "id": 2,
      "started_at": "2019-05-14T23:33:58.105Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-14T23:29:47.525Z",
      "id": 1,
      "started_at": "2019-05-14T23:13:13.244Z",
      "status": "succeeded",
      "unchanged": [],
      "updates": []
    }
  ]
}
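
Because the output is plain JSON, jq also makes it easy to pull out just the pieces you care about. For example, a quick filter (illustrative only) that lists the id and start time of the failed installation runs:

$ cat ./output/opsmanager/ops_manager_installations | jq -r '.installations[] | select(.status == "failed") | "\(.id) \(.started_at)"'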

Optionally, you can send this TAR file output with every ticket/case you create so support has a snapshot of what your environment looks like to help diagnose issues for you.

telemetry-collector send --path --api-key

For the API key, please contact your Pivotal AE or Platform Architect to request one, as the Telemetry team issues API keys to customers.
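
Putting that together, a send invocation would look something like the following, where the TAR path comes from the collect step above and {API-KEY} is the key issued to you:

$ ./telemetry-collector-darwin-amd64 send --path output/FoundationDetails_1571355194.tar --api-key {API-KEY}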


More Information 

https://docs.pivotal.io/telemetry/1-1/index.html
Categories: Fusion Middleware

Free Oracle Cloud: 14. Your Oracle Cloud Free Trial has expired (but FREE still running)

Dimitri Gielis - Thu, 2019-10-17 16:19
This post is the last post of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

Today I got an email that my Oracle Cloud account had expired. While I have an Always FREE Oracle Cloud account, when I signed up I also got some extra credits that lasted for a month. Those credits are no longer valid.


When you log in to your Oracle Cloud Dashboard you will get a notification on top too, but nothing to worry about.


It has some consequences though: on the menu, some options are grayed out. The one I actually use is Email Delivery, which seems to be grayed out too, although normally you should be able to send 1,000 emails per month. So maybe grayed out also means not full service.


When I checked it out, it said it's part of the paid plan. I remember some discussions at Oracle Open World where they recommended upgrading to a paid account; as long as you only use the Always FREE services, you are not charged.


So I decided to upgrade to a Paid account: Pay As You Go:


You have to provide a credit card, but that was a bit of an issue for me. Apparently, Safari does not really work well with this screen, so I switched to Chrome. The next hiccup I had was when I added my AMEX card... it said it was an invalid card.


Then I used my VISA card and that seemed to work well:

Click the Start Paid Account:


Finally, it will say your payment method will be reviewed and after that you are live.


It wasn't immediately clear to me that I had to wait for the confirmation email, but when I went to Payment Method again, I saw the review was still in progress:


And a few minutes later I got the email that my account was upgraded:


When you look at your Oracle Cloud Dashboard, there's a cost calculator, so you see how much you have to pay. As long as I use the Always FREE components, I expect the amount to stay 0 :)


But the nice thing now is that you have access to all of Oracle Cloud again (e.g. Email Delivery).
Categories: Development

Funny Gamertags for Xbox That You Don’t Want to Miss

VitalSoftTech - Thu, 2019-10-17 09:49

Being a regular Xbox gamer, some cool and funny Gamertags are very easy to come by. In the gaming universe of Xbox, the two essential things you need are 90% smartness and 10% luck, and you will come out the conqueror of the game. However, apart from skill and fortune, you need one […]

The post Funny Gamertags for Xbox That You Don’t Want to Miss appeared first on VitalSoftTech.

Categories: DBA Blogs

All Nippon Airways Builds a Bridge between Japan and the Rest of the World

Oracle Press Releases - Thu, 2019-10-17 09:00
Blog
All Nippon Airways Builds a Bridge between Japan and the Rest of the World

By Guest Author—Oct 17, 2019

All Nippon Airways (ANA) is the largest airline in Japan and one of the world’s leading carriers, carrying over 47 million passengers annually. Japan is already this decade’s fastest-growing major destination for tourism and by 2030, the country expects the number of international visitors to double to 60 million per year.

With many people expected to visit Tokyo for the Olympic and Paralympic Games next year, ANA wishes to bridge Japan and the rest of the world, providing a pleasant air-travel experience for all. “For ANA, this presents an opportunity to showcase our world-class services on a global stage,” says Manabu Yada, Manager, Corporate Office Procurement, Material & Services at ANA.

But the airline industry is crowded, and even at its size, ANA still needs to find ways to stand out. “The challenge we faced was finding a way to ensure our customers could trust they’re getting the best service in the market, while we remain profitable. To adopt to changes in a global business environment and make important business decisions fast, we needed to develop a system to visualize all procurement processes, comply with complex regulations, and optimize all procurement costs,” Yada said.

ANA implemented Oracle ERP Cloud to streamline how it approves invoices and manages its suppliers through automation and social collaboration. “Oracle ERP Cloud covers procurement processes and reduces required transactions. It also provides the tools to improve efficiency and productivity of our procurement team. In addition, it shows us the history of all transactions so we can reduce costs and risks of all processes and increase profitability, while staying compliant with all related regulations,” said Yada.

The airline also intends to use Oracle ERP Cloud for its 88 branches in the future and aims to cut procurement costs for indirect materials by 5%.

Watch the All Nippon Airways video to learn how it is delivering the perfect customer experience


Read More: Stories from Oracle Cloud

All Nippon Airways is one of the thousands of customers on its journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes.

Oracle Named a Leader in Digital Commerce for Ten Consecutive Evaluations

Oracle Press Releases - Thu, 2019-10-17 07:00
Press Release
Oracle Named a Leader in Digital Commerce for Ten Consecutive Evaluations
Oracle evaluated for its completeness of vision and ability to execute in Gartner Magic Quadrant

Redwood Shores, Calif.—Oct 17, 2019

Oracle has been recognized as a Leader in the Gartner Magic Quadrant for Digital Commerce for the 10th consecutive time.* The report focuses on transformational technologies and approaches in the digital commerce market that are delivering on the future needs of sellers and their customers. The report evaluates vendors based on completeness of vision and ability to execute. A complimentary copy of the report is available here.

“Our number one priority is to create agility for customers as the time to implement, adjust and innovate is paramount. That is why we designed Oracle CX Commerce to be highly customizable using cloud-based extensibility approaches and technologies,” said Ian Davis, vice president of product management, Oracle CX Commerce. “Our customers have complete flexibility whether they want a highly-curated Site Design or are looking to implement a Headless application based on our API-first solution. We do all of this in a backward-compatible way that allows Oracle to deliver more features consistently so customers can easily adopt without adding risk.”

Oracle CX Commerce (or Commerce Cloud as noted in the report) helps online businesses lower their total cost of ownership (TCO) with out-of-the-box essentials and intuitive business tools that support B2C and B2B multisite management, localization, price and promotions, merchandising, search, personalization, and rich asset management. With an API-first modern architecture, businesses can extend captivating buying experiences across any digital display—from mobile and web to voice, AR/VR, and IoT—to increase customer satisfaction with a library of predefined omnichannel journeys, including marketing, sales, returns, service and preorder.

“We consistently see our customers gaining positive momentum, and we attribute this progress to a concerted effort to minimize silos and put the customer at the center of their retail operations,” said Jeff Warren, vice president of strategy and solution management for Oracle Retail. “With commerce being the first touchpoint in most omnichannel journeys, it’s critical to help ensure the data associated with commerce interactions is seamlessly and effortlessly integrated into all aspects of the retail enterprise—from merchandise plan to stores, and supply chain fulfillment. We believe Oracle’s continued placement as a Leader in this Magic Quadrant is a testament to the value we are delivering to our customers on this front.”

Gartner, Magic Quadrant for Digital Commerce, Penny Gillespie, Christina Klock, Mike Lowndes, Sandy Shen, Jason Daigler, Yanna Dharmasthira, August 22, 2019.
*Oracle was previously listed as ATG because Oracle announced its acquisition of the company in November 2010.

Contact Info
Kimberly Guillon
Oracle
209.601.9152
kim.guillon@oracle.com
Gartner Disclaimer

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Oracle CX

Oracle CX Commerce is part of Oracle Customer Experience (CX), which empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences, and outcomes, Oracle CX Suite helps customers reduce IT complexity, deliver innovative customer experiences and achieve predictable and tangible business results.

About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. Oracle Retail offers an unmatched suite of applications that empower retailers to pivot their operations to customer, adopt next-practice retail, and invest with intent. For more information, visit www.oracle.com/industries/retail/.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kimberly Guillon

  • 209.601.9152

Excel Roundup Function – A Step By Step Tutorial

VitalSoftTech - Wed, 2019-10-16 10:28

Let’s face it; it’s all fun and games when you are playing with the numbers, but as soon as the decimals start to make an appearance, things start to take a turn. But it doesn’t necessarily have to be that way. With applications like Microsoft Excel at your disposal, handling the decimals has never been […]

The post Excel Roundup Function – A Step By Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

Creating a customized PostgreSQL container using buildah

Yann Neuhaus - Wed, 2019-10-16 09:09

Quite some time ago I blogged about how you could build your customized PostgreSQL container by using a Dockerfile and Docker build. In the meantime Red Hat has replaced Docker in OpenShift and SUSE has replaced Docker in CaaS as well. As a consequence there need to be other ways of building containers, and one of them is buildah. You can use buildah to build from a Dockerfile as well, but in this post we will use a simple bash script to create the container.

We start by defining four variables that define PGDATA, the PostgreSQL major version, the full version string and the minor version, which will be used to create our standard installation location (these will also go into the entrypoint, see below):

#!/bin/bash
_PGDATA="/u02/pgdata"
_PGMAJOR=12
_PGVERSION=12.0
_PGMINOR="db_0"

As mentioned in the beginning, buildah will be used to create the container. For running the container we need something else, and that is podman. You can run the container buildah creates with plain Docker as well, if you want, as it is OCI-compliant, but as Red Hat does not ship Docker anymore we will use the recommended way of doing it, which is podman. So the natural next step in the script is to install buildah and podman:

dnf install -y buildah podman

Buildah can create containers from scratch, which means you start with a container that contains nothing except some meta data:

newcontainer=$(buildah from scratch)

Once we have the new scratch container, it gets mounted so that dnf can be used from the host to install the packages we need into the container:

scratchmnt=$(buildah mount $newcontainer)
ls -la $scratchmnt
dnf install --installroot $scratchmnt --releasever 8 bash coreutils gcc openldap-devel platform-python-devel readline-devel bison flex perl-ExtUtils-Embed zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel bzip2 wget policycoreutils-python-utils make tar --setopt install_weak_deps=false --setopt=tsflags=nodocs --setopt=override_install_langs=en_US.utf8 -y

Using “buildah config” the container can be configured. This covers the author, environment variables, the default user and the entrypoint that will be used once the container is started:

buildah config --created-by "dbi services"  $newcontainer
buildah config --author "dbi services" --label name=dbiservices $newcontainer
buildah run $newcontainer groupadd postgres
buildah run $newcontainer useradd -g postgres -m postgres
buildah config --user postgres $newcontainer
buildah config --workingdir /home/postgres $newcontainer
buildah config --env PGDATABASE="" $newcontainer
buildah config --env PGUSERNAME="" $newcontainer
buildah config --env PGPASSWORD="" $newcontainer
buildah config --env PGDATA=${_PGDATA} $newcontainer
buildah config --env PGMAJOR=${_PGMAJOR} $newcontainer
buildah config --env PGMINOR=${_PGMINOR} $newcontainer
buildah config --env PGVERSION=${_PGVERSION} $newcontainer
buildah config --entrypoint /usr/bin/entrypoint.sh $newcontainer
buildah copy $newcontainer ./entrypoint.sh /usr/bin/entrypoint.sh
buildah run $newcontainer chmod +x /usr/bin/entrypoint.sh

What follows is basically installing PostgreSQL from source code:

buildah run --user root $newcontainer mkdir -p /u01 /u02
buildah run --user root $newcontainer chown postgres:postgres /u01 /u02
buildah run --user postgres $newcontainer wget https://ftp.postgresql.org/pub/source/v${_PGVERSION}/postgresql-${_PGVERSION}.tar.bz2 -O /home/postgres/postgresql-${_PGVERSION}.tar.bz2
buildah run --user postgres $newcontainer /usr/bin/bunzip2 /home/postgres/postgresql-${_PGVERSION}.tar.bz2
buildah run --user postgres $newcontainer /usr/bin/tar -xvf /home/postgres/postgresql-${_PGVERSION}.tar -C /home/postgres/
buildah run --user postgres $newcontainer /home/postgres/postgresql-12.0/configure --prefix=/u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR} --exec-prefix=/u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR} --bindir=/u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin --libdir=/u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/lib --includedir=/u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/include 
buildah run --user postgres $newcontainer /usr/bin/make -C /home/postgres all
buildah run --user postgres $newcontainer /usr/bin/make -C /home/postgres install
buildah run --user postgres $newcontainer /usr/bin/make -C /home/postgres/contrib install

Containers should be as small as possible, so let's do some cleanup:

buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/postgresql-${_PGVERSION}.tar
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/config
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/config.log
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/config.status
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/contrib
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/GNUmakefile
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/postgresql-12.0
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/src
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/doc
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/Makefile
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/.wget-hsts

When you want to run PostgreSQL inside a container you do not need any of the following binaries, so these can be cleaned as well:

buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/vacuumlo
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/vacuumdb
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/reindexdb
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pgbench
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_waldump
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_test_timing
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_test_fsync
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_standby
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_restore
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_recvlogical
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_receivewal
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_isready
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_dumpall
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_dump
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_checksums
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_basebackup
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_archivecleanup
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/oid2name
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/dropuser
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/dropdb
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/createuser
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/createdb
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/clusterdb

Last but not least, remove all the packages we no longer require and get rid of the dnf cache:

dnf remove --installroot $scratchmnt --releasever 8 gcc openldap-devel readline-devel bison flex perl-ExtUtils-Embed zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel bzip2 wget policycoreutils-python-utils make tar -y
dnf clean all -y --installroot $scratchmnt --releasever 8
# Clean up yum cache
if [ -d "${scratchmnt}" ]; then
rm -rf "${scratchmnt}"/var/cache/yum
fi
buildah unmount $newcontainer

Ready to publish the container:

buildah commit $newcontainer dbi-postgres
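
Optionally, if the image should also be available to other hosts, buildah can push the committed image to a registry. A minimal sketch, with registry.example.com as a placeholder for your own registry:

# push the committed image to a remote registry (registry.example.com is a placeholder)
buildah push localhost/dbi-postgres docker://registry.example.com/dbi-postgres:latest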

When you put all those steps into a script and run it, you should see the newly created container:

[root@doag2019 ~]$ buildah containers
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
47946e4b4fc8     *                  scratch                          working-container
[root@doag2019 ~]$

… but now we also have a new image that can be started:

IMAGE NAME                                               IMAGE TAG            IMAGE ID             CREATED AT             SIZE
localhost/dbi-postgres                                   latest               dfcd3e8d5273         Oct 13, 2019 13:22     461 MB

Once we start it, the entrypoint will be executed:

#!/bin/bash
# these are the environment variables which need to be set
PGDATA=${PGDATA}/${PGMAJOR}
PGHOME="/u01/app/postgres/product/${PGMAJOR}/${PGMINOR}"
PGAUTOCONF=${PGDATA}/postgresql.auto.conf
PGHBACONF=${PGDATA}/pg_hba.conf
PGDATABASENAME=${PGDATABASE}
PGUSERNAME=${PGUSERNAME}
PGPASSWD=${PGPASSWORD}
# create the database and the user
_pg_create_database_and_user()
{
    ${PGHOME}/bin/psql -c "create user ${PGUSERNAME} with login password '${PGPASSWD}'" postgres
    ${PGHOME}/bin/psql -c "create database ${PGDATABASENAME} with owner = ${PGUSERNAME}" postgres
    ${PGHOME}/bin/psql -c "create extension pg_stat_statements" postgres
}
# start the PostgreSQL instance
_pg_prestart()
{
    ${PGHOME}/bin/pg_ctl -D ${PGDATA} -w start
}
# Start PostgreSQL without detaching
_pg_start()
{
    exec ${PGHOME}/bin/postgres "-D" "${PGDATA}"
}
# stop the PostgreSQL instance
_pg_stop()
{
    ${PGHOME}/bin/pg_ctl -D ${PGDATA} stop -m fast
}
# initdb a new cluster
_pg_initdb()
{
    ${PGHOME}/bin/initdb -D ${PGDATA} --data-checksums
}
# adjust the postgresql parameters
_pg_adjust_config() {
    if [ -z "$PGMEMORY" ]; then MEM="128MB"
    else                        MEM=$PGMEMORY; fi
    # PostgreSQL parameters
    echo "shared_buffers='$MEM'" >> ${PGAUTOCONF}
    echo "effective_cache_size='128MB'" >> ${PGAUTOCONF}
    echo "listen_addresses = '*'" >> ${PGAUTOCONF}
    echo "logging_collector = 'off'" >> ${PGAUTOCONF}
    echo "log_truncate_on_rotation = 'on'" >> ${PGAUTOCONF}
    echo "log_line_prefix = '%m - %l - %p - %h - %u@%d '" >> ${PGAUTOCONF}
    echo "log_directory = 'pg_log'" >> ${PGAUTOCONF}
    echo "log_min_messages = 'WARNING'" >> ${PGAUTOCONF}
    echo "log_autovacuum_min_duration = '60s'" >> ${PGAUTOCONF}
    echo "log_min_error_statement = 'NOTICE'" >> ${PGAUTOCONF}
    echo "log_min_duration_statement = '30s'" >> ${PGAUTOCONF}
    echo "log_checkpoints = 'on'" >> ${PGAUTOCONF}
    echo "log_statement = 'none'" >> ${PGAUTOCONF}
    echo "log_lock_waits = 'on'" >> ${PGAUTOCONF}
    echo "log_temp_files = '0'" >> ${PGAUTOCONF}
    echo "log_timezone = 'Europe/Zurich'" >> ${PGAUTOCONF}
    echo "log_connections=on" >> ${PGAUTOCONF}
    echo "log_disconnections=on" >> ${PGAUTOCONF}
    echo "log_duration=off" >> ${PGAUTOCONF}
    echo "client_min_messages = 'WARNING'" >> ${PGAUTOCONF}
    echo "wal_level = 'replica'" >> ${PGAUTOCONF}
    echo "wal_compression=on" >> ${PGAUTOCONF}
    echo "max_replication_slots=20" >> ${PGAUTOCONF}
    echo "max_wal_senders=20" >> ${PGAUTOCONF}
    echo "hot_standby_feedback = 'on'" >> ${PGAUTOCONF}
    echo "cluster_name = '${PGDATABASENAME}'" >> ${PGAUTOCONF}
    echo "max_replication_slots = '10'" >> ${PGAUTOCONF}
    echo "work_mem=8MB" >> ${PGAUTOCONF}
    echo "maintenance_work_mem=64MB" >> ${PGAUTOCONF}
    echo "shared_preload_libraries='pg_stat_statements'" >> ${PGAUTOCONF}
    echo "autovacuum_max_workers=6" >> ${PGAUTOCONF}
    echo "autovacuum_vacuum_scale_factor=0.1" >> ${PGAUTOCONF}
    echo "autovacuum_vacuum_threshold=50" >> ${PGAUTOCONF}
    echo "archive_mode=on" >> ${PGAUTOCONF}
    echo "archive_command='/bin/true'" >> ${PGAUTOCONF}
    # Authentication settings in pg_hba.conf
    echo "host    all             all             0.0.0.0/0            md5"  >> ${PGHBACONF}
}
# initialize and start a new cluster
_pg_init_and_start()
{
    # initialize a new cluster
    _pg_initdb
    # set params and access permissions
    _pg_adjust_config
    # start the new cluster
    _pg_prestart
    # set username and password
    _pg_create_database_and_user
    # restart database with correct pid
    _pg_stop
    _pg_start
}
# check if $PGDATA exists
if [ -e ${PGDATA} ]; then
    # when $PGDATA exists we need to check if there are files
    # because when there are files we do not want to initdb
    if [ -e "${DEBUG}" ]; then
        /bin/bash
    elif [ -e "${PGDATA}/base" ]; then
        # when there is the base directory this
        # probably is a valid PostgreSQL cluster
        # so we just start it
        _pg_start
    else
        # when there is no base directory then we
        # should be able to initialize a new cluster
        # and then start it
        _pg_init_and_start
    fi
else
    # create PGDATA
    mkdir -p ${PGDATA}
    # initialize and start the new cluster
    _pg_init_and_start
fi

Starting that up using podman:

[root@doag2019 ~]$ podman run -e PGDATABASE=test -e PGUSERNAME=test -e PGPASSWORD=test --detach -p 5432:5432 localhost/dbi-postgres
f933df8216de83b3c2243860ace02f231748a05273c16d3ddb0308231004552f
CONTAINER ID  IMAGE                          COMMAND               CREATED             STATUS             PORTS                   NAMES
f933df8216de  localhost/dbi-postgres:latest  /bin/sh -c /usr/b...  About a minute ago  Up 59 seconds ago  0.0.0.0:5432->5432/tcp  nervous_leavitt

… and connecting from the host system:

[root@doag2019 ~]$ psql -p 5432 -h localhost -U test test
Password for user test:
psql (10.6, server 12.0)
WARNING: psql major version 10, server major version 12.
Some psql features might not work.
Type "help" for help.
test=> select version();
version
--------------------------------------------------------------------------------------------------------
PostgreSQL 12.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
(1 row)
test=> \q

Once you have that scripted and ready, it is a very convenient way of creating images. What I like most is that you can make changes afterwards without starting from scratch:

[root@doag2019 ~]$ podman inspect localhost/dbi-postgres
[
{
"Id": "dfcd3e8d5273116e5678806dfe7bbf3ca2276549db73e62f27b967673df8084c",
"Digest": "sha256:b2d65e569becafbe64e8bcb6d49b065188411f596c04dea2cf335f677e2db68e",
"RepoTags": [
"localhost/dbi-postgres:latest"
],
"RepoDigests": [
"localhost/dbi-postgres@sha256:b2d65e569becafbe64e8bcb6d49b065188411f596c04dea2cf335f677e2db68e"
],
"Parent": "",
"Comment": "",
"Created": "2019-10-13T11:22:15.096957689Z",
"Config": {
"User": "postgres",
"Env": [
"PGDATABASE=",
"PGUSERNAME=",
"PGPASSWORD=",
"PGDATA=/u02/pgdata",
"PGMAJOR=12",
"PGMINOR=db_0",
"PGVERSION=12.0"
],
"Entrypoint": [
"/bin/sh",
"-c",
"/usr/bin/entrypoint.sh"
],
"WorkingDir": "/home/postgres",
"Labels": {
"name": "dbiservices"
}
},
"Version": "",
"Author": "dbiservices",
"Architecture": "amd64",
"Os": "linux",
"Size": 460805033,
"VirtualSize": 460805033,
"GraphDriver": {
"Name": "overlay",
"Data": {
"MergedDir": "/var/lib/containers/storage/overlay/89de699f19781bb61eec12cf61a097a9daa31d7725fc3c078c76d0d6291cb074/merged",
"UpperDir": "/var/lib/containers/storage/overlay/89de699f19781bb61eec12cf61a097a9daa31d7725fc3c078c76d0d6291cb074/diff",
"WorkDir": "/var/lib/containers/storage/overlay/89de699f19781bb61eec12cf61a097a9daa31d7725fc3c078c76d0d6291cb074/work"
}
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:89de699f19781bb61eec12cf61a097a9daa31d7725fc3c078c76d0d6291cb074"
]
},
"Labels": {
"name": "dbiservices"
},
"Annotations": {},
"ManifestType": "application/vnd.oci.image.manifest.v1+json",
"User": "postgres",
"History": [
{
"created": "2019-10-13T11:22:15.096957689Z",
"created_by": "dbi services",
"author": "dbiservices"
}
]
}
]

Assume we want to add a new environment variable. All we need to do is this:

[root@doag2019 ~]$ buildah containers
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
47946e4b4fc8     *                  scratch                          working-container
[root@doag2019 ~]$ buildah config --env XXXXXXX="xxxxxxxx" 47946e4b4fc8
[root@doag2019 ~]$ buildah commit 47946e4b4fc8 dbi-postgres
Getting image source signatures
Skipping fetch of repeat blob sha256:9b74f2770486cdb56539b4a112b95ad7e10aced3a2213d33878f8fd736b5c684
Copying config sha256:e2db86571bfa2e64e6079077fe023e38a07544ccda529ba1c3bfc04984f2ac74
606 B / 606 B [============================================================] 0s
Writing manifest to image destination
Storing signatures
e2db86571bfa2e64e6079077fe023e38a07544ccda529ba1c3bfc04984f2ac74

The new image with the new variable is ready:

[root@doag2019 ~]$ buildah images
IMAGE NAME                                               IMAGE TAG            IMAGE ID             CREATED AT             SIZE
                                                                              dfcd3e8d5273         Oct 13, 2019 13:22     461 MB
localhost/dbi-postgres                                   latest               e2db86571bfa         Oct 13, 2019 13:52     461 MB
[root@doag2019 ~]$ buildah inspect localhost/dbi-postgres
...
"Env": [
"PGDATABASE=",
"PGUSERNAME=",
"PGPASSWORD=",
"PGDATA=/u02/pgdata",
"PGMAJOR=12",
"PGMINOR=db_0",
"PGVERSION=12.0",
"XXXXXXX=xxxxxxxx"
],
...

Nice. If you are happy with the image, the scratch container can be deleted.
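
A one-liner does that, using the container ID reported earlier by "buildah containers":

# remove the working scratch container once the image has been committed
buildah rm 47946e4b4fc8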

The article Creating a customized PostgreSQL container using buildah appeared first on Blog dbi services.

Clustering_Factor

Jonathan Lewis - Wed, 2019-10-16 08:07

Originally drafted July 2018

“How do you find out what the clustering_factor of an index would be without first creating the index ?”

I’m not sure this is really a question worth asking or answering[1], but since someone asked it (and given the draft date I have no idea who, where, when or why), here’s an answer for simple heap tables in the good old days before Oracle made public the table_cached_blocks preference. It works by sorting the columns you want in the index together with the table rowid, and then comparing the file/block component of the rowid (cutting the relevant characters from the string representation of the rowid) with the previous one to see if the current row is in the same block as the previous row.  If the row is in a different block we count one, otherwise zero.  Finally we sum the ones.

In the demo I’ve copied a little data from view all_objects, with the intention of creating an index on object_name. After running my counting query I’ve then created the index and checked its clustering_factor to show that I’ve got a match.


rem
rem     Script: clustering_factor_est.sql
rem     Author: J.P.Lewis
rem     Dated:  July 2018
rem
create table t1
as
select  *
from    all_objects
where   rownum <= 10000
;

prompt  ====================
prompt  Tablescan with lag()
prompt  ====================

select
        sum(count_chg)
from    (
select
        case
                when substr(rowid,7,9) <> lag(substr(rowid,7,9),1,'000000000') over(order by object_name, rowid)
                        then 1
                        else 0
        end     count_chg
from t1
)
;

prompt  ======================
prompt  create index and check
prompt  ======================

create index t1_i1 on t1(object_name);

select  clustering_factor
from    user_indexes 
where   table_name = 'T1'
and     index_name = 'T1_I1'
;

Pasting the output from running the above:


Table created.

====================
Tablescan with lag()
====================

SUM(COUNT_CHG)
--------------
          3901

1 row selected.

======================
create index and check
======================

Index created.


CLUSTERING_FACTOR
-----------------
             3901

1 row selected.


This will work for a global index on a partitioned table, but will give meaningless answers for globally partitioned indexes and local indexes. Furthermore it’s not a mechanism that lends itself to calculating the clustering_factor if you’ve set the table_cached_blocks preference to a value other than 1.

[1] Given the amount of work it would take to run the query to check the clustering_factor you might as well create the index (invisible, perhaps, and nologging if that doesn't cause problems) – which also gives you the option of setting the table_cached_blocks preference and gathering stats to see how the clustering_factor varies.
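A hedged sketch of that suggestion, reusing the demo table from above (the index name t1_i2 and the preference value 16 are illustrative choices, not part of the original script):

create index t1_i2 on t1(object_name) nologging invisible;

begin
        dbms_stats.set_table_prefs(user, 'T1', 'TABLE_CACHED_BLOCKS', '16');
        dbms_stats.gather_index_stats(user, 'T1_I2');
end;
/

select  index_name, clustering_factor
from    user_indexes
where   index_name = 'T1_I2'
;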

Update (shortly after posting)

It has occurred to me that if you have a "nosegment" index that you've been using to test whether or not the optimizer would use it IF you created it, then the one piece of information that is missing from the data dictionary for that index is its clustering_factor. (For a multicolumn index you can get a valid value for distinct_keys by creating a column group on the set of columns – which would otherwise be faked when you create the nosegment index.) This code might allow you to write a clustering_factor to the index definition that would give you a better idea of whether the optimizer would use it if you created it.  (To be tested.)
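As a rough illustration of the idea (a sketch only, still to be tested; the index name t1_ns is hypothetical), the value produced by the lag() query could be written to the dictionary with dbms_stats:

create index t1_ns on t1(object_name) nosegment;

begin
        dbms_stats.set_index_stats(
                ownname => user,
                indname => 'T1_NS',
                clstfct => 3901         -- the figure returned by the tablescan-with-lag query
        );
end;
/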

 

 

Which Version Number Corresponds Which PeopleTools Object?

David Kurtz - Tue, 2019-10-15 10:15
Recently somebody asked me "Our “CRM” values in PSLOCK and PSVERSION are growing tremendously and we don’t know why. We will be opening a case with Oracle but … do you know what that “CRM” value is related to? We are not using the CRM product in our PeopleSoft installation."
There is some documentation on the Oracle Support website, but it is not exhaustive and may not be completely up to date.
The page in the PTRef utility that describes the relationship of version numbers to PeopleTools tables is one of the few static pages in the tool.  I have now updated it with the information in the above Oracle support notes, but there are other version numbers.
In the previous blog post, I showed how to increment version numbers before updating PeopleTools objects.  I knew RDM (the Record Definition Manager) is the OBJECTTYPENAME for PSRECDEFN because I worked that out by tracing Application Designer while it saved a record change.  That remains the only completely reliable way to determine the relationship.
However, version number matching is also a useful technique; even when it does not provide a completely definitive answer, it dramatically cuts down the amount of work that is then necessary.
I have written a little PL/SQL routine, delivered with PTRef, that extracts the maximum version number for each PeopleTools table in PeopleSoft (other than the PS%DEL tables) and stores it in a working storage table (I used the PLAN_TABLE because it is always there on an Oracle database).  Then you can compare the version number on PSLOCK/PSVERSION with the maximum version on the PeopleTools object.
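The PTRef PL/SQL itself is not reproduced here, but for a single table the comparison amounts to something like this (a hand-rolled sketch, using PSCONTDEFN purely as an example):

SELECT 'PSCONTDEFN' recname, MAX(version) max_version, COUNT(*) cnt
FROM   pscontdefn;

SELECT objecttypename, version
FROM   psversion
WHERE  version = (SELECT MAX(version) FROM pscontdefn);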
If the version number is 1, you can’t do the match because the version number has never been incremented, at least since it was last reset by the VERSION application engine.
If the version is only updated occasionally you may have some trouble with duplicate matches. In my example, 3 tables have a maximum version number of 80, while only one version number is 80.
RECNAME                   MAX        CNT
------------------ ---------- ----------
PSAEAPPLDEFN               80       3504
PSMSGNODEDEFN              80         78
PSOPRVERDFN                80       1468
I would guess that OPRVM matches PSOPRVERDFN, and the above support notes confirm this, but otherwise, you would have to check that manually with trace.
OBJECTTY    VERSION
-------- ----------
OPRVM            80
Higher version numbers are easier to match because they are less likely to have duplicate matches.
So to return to the original question, what is CRM?  In my sample system, version numbers CRM and SCATM are both 3.
OBJECTTY    VERSION
-------- ----------
CRM               3
SCATM             3
However, only PSCONTDEFN has a maximum version of 3.
RECNAME                   MAX        CNT
------------------ ---------- ----------
PSCONTDEFN                  3       7567
Again not a perfect match, but again Oracle Support Note 664848.1 confirms that CRM corresponds to PSCONTDEFN.  CRM stands for Content Registry Manager.
So the question now becomes what is updating the content definitions, and hence increasing the version number?  It turned out to be an entity registry synchronisation process that was run daily.
It is perfectly legitimate for many updated rows on a PeopleTools table to be given the same version number.  The version number does not need to be incremented again for each row being updated, but then the row-level locks on PSVERSION and PSLOCK created by their updates must be held until the update on the PeopleTools table is committed.  That can increase contention on the version number update.  The alternative is to commit after each update and then increment the version numbers again.  Many PeopleSoft processes do exactly that, and it can, in turn, lead to a massive increase in some version numbers.

Marks & Spencer Sparks, a Walkthrough

VitalSoftTech - Tue, 2019-10-15 10:02

As North Americans have their Walmarts and Targets, the British have their own one-stop shop, Marks & Spencer, established over one hundred years ago. It is a retail store for all home goods and a household name in Britain that caters to all kinds of domestic needs. A business as large as this […]

The post Marks & Spencer Sparks, a Walkthrough appeared first on VitalSoftTech.

Categories: DBA Blogs

New Study: 64% of People Trust a Robot More Than Their Manager

Oracle Press Releases - Tue, 2019-10-15 08:00
Press Release
New Study: 64% of People Trust a Robot More Than Their Manager
Global research highlights how AI is changing the relationship between people and technology at work

Redwood Shores, Calif.—Oct 15, 2019

People have more trust in robots than their managers, according to the second annual AI at Work study conducted by Oracle and Future Workplace, a research firm preparing leaders for disruptions in recruiting, development and employee engagement. The study of 8,370 employees, managers and HR leaders across 10 countries, found that AI has changed the relationship between people and technology at work and is reshaping the role HR teams and managers need to play in attracting, retaining and developing talent.

AI is Changing the Relationship Between People and Technology at Work

Contrary to common fears around how AI will impact jobs, employees, managers and HR leaders across the globe are reporting increased adoption of AI at work and many are welcoming AI with love and optimism.

  • AI is becoming more prominent with 50 percent of workers currently using some form of AI at work compared to only 32 percent last year. Workers in China (77 percent) and India (78 percent) have adopted AI over 2X more than those in France (32 percent) and Japan (29 percent).
  • The majority (65 percent) of workers are optimistic, excited and grateful about having robot co-workers and nearly a quarter report having a loving and gratifying relationship with AI at work.
  • Workers in India (60 percent) and China (56 percent) are the most excited about AI, followed by the UAE (44 percent), Singapore (41 percent), Brazil (32 percent), Australia/New Zealand (26 percent), Japan (25 percent), U.S. (22 percent), UK (20 percent) and France (8 percent).
  • Men have a more positive view of AI at work than women with 32 percent of men optimistic vs. 23 percent of women.
Workers Trust Robots More Than Their Managers

The increasing adoption of AI at work is having a significant impact on the way employees interact with their managers. As a result, the traditional role of HR teams and the manager is shifting.

  • 64 percent of people would trust a robot more than their manager and half have turned to a robot instead of their manager for advice.
  • Workers in India (89 percent) and China (88 percent) are more trusting of robots over their managers, followed by Singapore (83 percent), Brazil (78 percent), Japan (76 percent), UAE (74 percent), Australia/New Zealand (58 percent), U.S. (57 percent), UK (54 percent) and France (56 percent).
  • More men (56 percent) than women (44 percent) have turned to AI over their managers.
  • 82 percent of people think robots can do things better than their managers.
  • When asked what robots can do better than their managers, survey respondents said robots are better at providing unbiased information (26 percent), maintaining work schedules (34 percent), problem solving (29 percent) and managing a budget (26 percent).
  • When asked what managers can do better than robots, workers said the top three tasks were understanding their feelings (45 percent), coaching them (33 percent) and creating a work culture (29 percent).
AI is Here to Stay: Organizations Need to Simplify and Secure AI to Stay Competitive

The impact of AI at work is only just beginning and in order to take advantage of the latest advancements in AI, organizations need to focus on simplifying and securing AI at work or risk being left behind.

  • 76 percent of workers (and 81 percent of HR leaders) find it challenging to keep up with the pace of technological changes in the workplace.
  • Workers want a simplified experience with AI at work, asking for a better user interface (34 percent), best practice training (30 percent) and an experience that is personalized to their behavior (30 percent).
  • Security (31 percent) and privacy (30 percent) are the main concerns preventing workers from using AI at work.
  • Digital natives Gen Z (43 percent) and Millennials (45 percent) are more concerned about privacy and security at work than Gen X (29 percent) and Baby Boomers (23 percent).
Supporting quotes

“The latest advancements in machine learning and artificial intelligence are rapidly reaching mainstream, resulting in a massive shift in the way people across the world interact with technology and their teams. As this study shows, the relationship between humans and machines is being redefined at work, and there is no one-size-fits-all approach to successfully managing this change. Instead, organizations need to partner with their HR organization to personalize the approach to implementing AI at work in order to meet the changing expectations of their teams around the world,” said Emily He, SVP, Human Capital Management Cloud Business Group, Oracle.

“Over the past two years we’ve found that workers have become more optimistic as they’ve adopted AI in the workplace and HR is leading the way. The 2019 study shows that AI is redefining not only the relationship between worker and manager, but also the role of a manager in an AI-driven workplace. Based on the findings, managers will remain relevant in the future if they focus on being human and using their soft skills, while leaving the technical skills and routine tasks to robots,” said Dan Schawbel, Research Director at Future Workplace.

"Our 2019 results reveal that forward looking companies are already capitalizing on the power of AI," said Jeanne Meister Founding Partner, Future Workplace. “As workers and managers leverage the power of artificial intelligence in the workplace, they are moving from fear to enthusiasm as they see the possibility of being freed of many of their routine tasks and having more time to solve critical business problems for the enterprise.”

Learn more about this global study here.

Methodology

Research findings are based on a global survey conducted by Savanta between July 2 and August 9, 2019. In total, 8,370 people completed the survey. The study was administered online and fielded in 10 different countries (and in six languages). Permanent full-time employees between the ages of 18 and 74 were eligible to participate. The survey targeted HR Leaders, Managers and Employees. Respondents are recruited through a number of different mechanisms, via different sources, to join the panels and participate in market research surveys. All panelists have passed a double opt-in process and complete on average 300 profiling data points prior to taking part in surveys. Respondents are invited to take part via email and are provided with a small monetary incentive for doing so.

Contact Info
Celina Bertallee
Oracle
559-283-2425
celina.bertallee@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Future Workplace

Future Workplace is an executive development firm dedicated to rethinking and re-imagining the workplace. Future Workplace works with heads of talent management, human resources, corporate learning, and diversity to prepare for the changes impacting recruitment, employee development, and engagement. Future Workplace is host to the 2020 Workplace Network, an Executive Council that includes 50 plus heads of Corporate Learning, Talent, and Human Resources who come together to discuss debate and share “next” practices impacting the workplace and workforce of the future. For more information, please visit: http://www.futureworkplace.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Celina Bertallee

  • 559-283-2425

Oracle Equips Case Workers and Citizens to Address Complex Societal Problems

Oracle Press Releases - Tue, 2019-10-15 07:00
Press Release
Oracle Equips Case Workers and Citizens to Address Complex Societal Problems
Comprehensive case management platform brings real-time data to bear on decisions by health and human services personnel

Redwood Shores, Calif.—Oct 15, 2019

Taking a new people-centric approach to providing support and care, Oracle offers new services to citizens in need. New digital case management capabilities, now available in Oracle Customer Experience (CX) for Public Sector, provide a modern platform to help citizens, case workers, and third party social service providers deal with complex societal issues and achieve positive outcomes.

By offering real time access to vital data, Oracle’s solution will enable state and local organizations to take a more people-centric approach to providing support and care—an approach that has not previously been possible due to the limitations of older technologies. Care providers will be able to effectively launch new services to their citizens, and to easily manage complex interactions for cases that span many years and complex, changing family circumstances.

“Today’s digitally-empowered citizens demand easy access to health and human services agencies for critical support and care. And case workers expect accessible tools that can manage complex cases that often evolve over extended periods,” said Rob Tarkoff, executive vice president of Oracle CX product development. “Until now, there has been a serious gap in the comprehensiveness and sophistication of solutions to serve many of the needs of this market. With Oracle, case workers can now have real-time snapshots of client cases along with intelligent recommendations for next steps - even in the most complex situations. This enables workers to take immediate and potentially life-saving actions.”

The Oracle solution transforms traditional ways social service agencies have managed cases and served customers through the integration of modern digital technologies and channels. The platform integrates technologies spanning cloud, digital outreach, omni-channel service, case management, mobile, social, Internet of Things, and Artificial Intelligence while ensuring maximum security and information privacy.

The digital enablement of complex case management not only serves state and local health and human service organizations, but also other government agencies in areas as diverse as immigration, taxation, benefits, and city and state citizen engagement.

Oracle CX solutions for health and human services combine decades of experience with a broad cloud service portfolio that enables current IT infrastructures with a proven path to digital transformation. It offers comprehensive digital engagement solutions for several aspects of citizen engagement including real-time personalized advice, self-service portals with embedded knowledge management, and a digital application process for benefit programs which is accessible through mobile, web, social channels, chatbots, call centers, and local offices.

For more information, go to https://www.oracle.com/applications/customer-experience/industries/public-sector/

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Top Universities Make the Grade with Oracle Cloud

Oracle Press Releases - Tue, 2019-10-15 07:00
Press Release
Top Universities Make the Grade with Oracle Cloud
DePaul University and Ithaca College look to Oracle to cut costs and streamline operations

EDUCAUSE, Chicago—Oct 15, 2019

Higher education institutions are rapidly digitizing to meet industry demands for new models of teaching, learning and research. Those institutions seeking to build capacity for long-term innovation are increasingly moving to the cloud to compete in this evolving environment, as institutions across the U.S. continue to look to Oracle Cloud to increase business efficiency and improve student outcomes.

Oracle brings unparalleled advantages and opportunities to the Higher Education market, including the ability to reduce operational and student costs and help ensure fiscal sustainability.

Universities recently adopting Oracle Cloud include:

  • Ithaca College, a private liberal arts college founded over 125 years ago, will consolidate all of its applications with Oracle to operate on a single cloud platform. It is currently implementing Oracle’s E-Business Suite and HCM Cloud solutions, and is expanding to automate and migrate its financial and planning functions to the cloud. With the addition of Oracle ERP Cloud and EPM Cloud, Ithaca College expects to see an increase in staff and business efficiency, along with enhanced analysis, monitoring and management reporting throughout the planning and budgeting process.

    “Oracle’s successful track record helping higher education institutions transition to the cloud inspired us to expand our partnership with them,” said David Weil, associate vice president and chief information officer, Ithaca College. “We’re confident that Oracle Cloud will enable us to streamline our operations to even better serve our students.”

  • DePaul University, the largest Catholic university in the country, will implement Oracle ERP Cloud and HCM Cloud and integrate them with its existing Oracle PeopleSoft Campus Solutions application. The interest to move to a modern system was based on the university’s desire to adopt best practices for business processes, and new capabilities such as AI and embedded intelligence to drive better efficiency and sustainability.

    “We are proud to be the first institution using Oracle’s PeopleSoft solution to migrate to cloud in the Chicagoland area,” said Bob McCormick, vice president for Information Services of DePaul University. “At DePaul, we are looking forward to modernizing our business and demonstrating leadership in the higher education industry.”

“From financial pressures, to rising student expectations, the higher education industry is rapidly evolving. Oracle is committed to helping these institutions adapt and thrive through the change,” said Keith Rajecki, vice president of Oracle Public Sector, Education and Research. “We are proud to be partnering with Ithaca College and DePaul University to provide them with the industry’s most complete, scalable cloud to transform their operations and provide the agility they need to enhance financial planning functions.”

Oracle’s EPM Suite provides data on demand, enabling institutions to use their time more effectively and provide real business value. The solution elevates business outcomes through embedded intelligence and advanced analytics, while also helping to lower costs and gain continuous innovation.

Oracle ERP Cloud provides real-time data for predictive insights, improving decision-making and performance management to help institutions accelerate business outcomes and drive operational excellence.

Oracle’s HCM Cloud provides a powerful single, global human resources solution that helps streamline processes and meet local compliance needs while also engaging workforces throughout the entire talent lifecycle.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Michigan State University Amplifies Student Success through the Cloud

Oracle Press Releases - Tue, 2019-10-15 07:00
Press Release
Michigan State University Amplifies Student Success through the Cloud
Leading research university invests in Oracle Cloud

EDUCAUSE, Chicago—Oct 15, 2019

Michigan State University, a leading higher education and research institution, has engaged Oracle to replace its mainframe student information system infrastructure. This modernization project will help the university address challenges around fiscal sustainability, transforming the student experience and building capacity for long-term institutional innovation. To reach these goals, the university charted an incremental path to the cloud and will be implementing several Oracle Cloud solutions, including Oracle Student Financial Planning (SFP). Michigan State University also selected Exadata Cloud at Customer to operate PeopleSoft Campus Solutions in one hybrid cloud environment.

“Our partnership with Michigan State confirms our investment, commitment and success in the higher education market,” said Keith Rajecki, vice president of Oracle Public Sector, Education and Research. “We offer a complete and market-leading set of applications and technology, and we are continuing to invest in research and development efforts.”

For the past 160 years, Michigan State has pushed the boundaries of discovery and forged partnerships to solve the most pressing global challenges while providing life-changing opportunities to a diverse and inclusive academic community. The university joins an increasing number of higher education institutions moving to the cloud following a practical path that maps to its goals of improving student outcomes and institutional standing, achieving operational efficiency, and optimizing decision making through enhanced insights, all while also reducing compliance costs.

“With higher education facing increasing student expectations, changing fiscal realities, and the need to future-proof their institutions, the adoption of cloud technologies and applications is proving to be a key component of overall institutional success,” said Vivian Wong, group vice president of Higher Education Development at Oracle. “Oracle’s unified stack of solutions, including infrastructure through to business and purpose-built higher education applications, offer institutions, like Michigan State, the ability to choose an incremental path to cloud—delivering immediate value while also considering their unique contextual requirements,” she concluded.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Understanding PeopleTools Object Version Numbers

David Kurtz - Mon, 2019-10-14 13:36
I was recently asked a question about PeopleSoft version numbers, but before I address that directly, I think it would be useful to explain what is their general purpose.
Caching
The PeopleSoft data model and application are mostly stored in the database in PeopleTools tables.  These tables are queried as the application executes.  For example, when you open a component, the component and pages, including all the PeopleCode, the definition of any records used, and so on have to be loaded into the component buffer.  Ultimately this information comes from the PeopleTools tables.  To save the overhead of repeatedly querying these tables, PeopleSoft caches this data locally in physical files in the application server and process scheduler domains.  The application servers also cache some of this information in memory to save visiting the local physical cache.  Application Designer also maintains a physical cache.
Over time, as the application executes, the cache files build up.  Occasionally it is necessary to delete the cache files, and then it becomes clear just how significant the overhead of the PeopleTools queries is, as a period of poor performance is seen while the application builds up fresh cache files.
Physical cache files are created in directories in the application server and process scheduler Tuxedo domains.  By default, each process maintains its own private cache.  Separate directories of cache files are created for each type of PeopleSoft server process in each domain.    Pairs of cache files are created in each directory for each object type as needed.  There is also a CACHE.LOK file in each directory that is used to ensure that only one process is accessing that cache directory concurrently.
It is possible to run with a shared physical cache, but then it is read-only and must be pre-generated.  It is very rare to see this implemented, because everyone expects to continuously deliver changes over time, and if you had a shared cache you would have to deliver an updated set of shared cache file to every domain every time you delivered a new PeopleTools object.
The cache files come in pairs.  The name of the cache files is the Object Type Name.  This corresponds to the OBJECTTYPENAME on the PSLOCK and PSVERSION tables.  The .DAT file contains the data to be cached.  The .KEY file is an index for the .DAT file, and it also holds the version number of the cached object.

-rw------- 1 psadm2 oracle 5228492 Jun 12 06:37 RDM.DAT
-rw------- 1 psadm2 oracle 69120 Jun 12 06:37 RDM.KEY
-rw------- 1 psadm2 oracle 0 Oct 26 2015 ROLM.DAT
-rw------- 1 psadm2 oracle 24192 Oct 26 2015 ROLM.KEY
-rw------- 1 psadm2 oracle 0 Oct 26 2015 RSM.DAT
-rw------- 1 psadm2 oracle 24192 Oct 26 2015 RSM.KEY
Version Numbers
Version numbers track when a cached PeopleTools object has been changed, either by Application Designer, or a change in configuration, or the application.  The version numbers are sequences generated from two PeopleTools tables PSLOCK and PSVERSION that hold the highest version number for each type of object.  These two tables have the same structure.
SQL> desc psversion
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OBJECTTYPENAME                            NOT NULL VARCHAR2(8 CHAR)
 VERSION                                   NOT NULL NUMBER(38)
There are now over 100 different version numbers, each with a specific object type name and each tracking a specific PeopleTools object.  There is a global version number, with the object type name of SYS, that is incremented whenever any other version number is incremented.
I have no idea why two identical tables of version numbers were created.  I can see no reason for this, but it has been like this since the version numbers were changed (if I remember correctly) in PeopleTools 7.  In early versions of PeopleTools, not all version numbers were on both tables, but in at least PeopleTools 8.55 only one object type appears on PSVERSION and not PSLOCK.
When an object is changed, the object and global version numbers are incremented, and the incremented object version number is recorded on the object in the PeopleTools table.  The version number on the object is also stored in the physical cache files when the object is cached.  If the version on the database is higher than that in the cache file, then the PeopleSoft process knows it must query the latest version from the PeopleTools table and update the cache file.
How to Update Version Numbers
It is not generally recommended, nor strictly speaking supported, to update PeopleTools tables directly with SQL.  Apart from the risk of updating them incorrectly, or to invalid values, you also need to ensure that the changes are picked up by PeopleTools processes and that they do not simply continue to read the cached data.  However, occasionally it is the pragmatic way to do something.
Here is an example from Chapter 5 of PeopleSoft for the Oracle DBA that shows how to maintain version numbers so that the change is picked up by PeopleTools processes.  I want to mark alternate search key indexes as unique where the record has a unique key whose columns are a subset of the alternate search key's columns, making those indexes effectively unique.  Then Application Designer will build the indexes as unique.
-- increment the object-specific (RDM) and global (SYS) version numbers on both tables
UPDATE psversion SET version = version + 1
WHERE  objecttypename IN ('SYS','RDM');

UPDATE pslock SET version = version + 1
WHERE  objecttypename IN ('SYS','RDM');

-- stamp the new RDM version onto the record definition being changed
-- (the record name is deliberately left blank here)
UPDATE psrecdefn
SET    version = (
       SELECT version FROM psversion WHERE objecttypename = 'RDM')
WHERE  recname = '';

-- mark the alternate search key indexes (indextype = 3) as unique
-- where the record also has a unique key index (indexid = '_')
UPDATE psindexdefn a
SET    a.uniqueflag = 1
WHERE  a.uniqueflag = 0
AND    a.indextype = 3
AND    EXISTS(
       SELECT 'x'
       FROM   psindexdefn k
       WHERE  k.recname = a.recname
       AND    k.indexid = '_'
       AND    k.indextype = 1
       AND    k.uniqueflag = 1)
AND    a.recname = '';
I am updating a PeopleTools object (PSINDEXDEFN) that doesn't have a version number, but its parent, PSRECDEFN, does have one.  I happen to know that object type RDM (the Record Definition Manager) generates the version number for PSRECDEFN.  I found that out by tracing Application Designer while it saved a record change.  That is the only completely reliable method to determine which sequence is used for which record.  However, I will discuss another, less onerous, matching method in a subsequent blog post.
I must increment the RDM and SYS version numbers and write the new RDM version number to the updated rows on PSRECDEFN.  The next time a PeopleSoft process needs to read a record definition it will check the version numbers.  The increment of the SYS version tells PeopleSoft that an object has changed, and it will then detect that the RDM version number has changed, so it has to reload and cache objects with version numbers greater than the last cached version number for that object type.
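As a hedged sanity check after running updates like those above, the RDM counter should now match the VERSION on the updated PSRECDEFN rows (:recname stands for the record name that was deliberately left blank in the example):

SELECT objecttypename, version
FROM   psversion
WHERE  objecttypename IN ('SYS','RDM');

SELECT recname, version
FROM   psrecdefn
WHERE  recname = :recname;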

Basic Replication -- 7 : Refresh Groups

Hemant K Chitale - Fri, 2019-10-11 23:24
So far, all my blog posts in this series cover "single" Materialized Views (even if I have created two MVs, they are independent of each other and can be refreshed at different schedules).

A Refresh Group is what you would define if you want multiple MVs to be refreshed to the same point in time.  This allows for
(a) data from transactions that touch multiple tables
or
(b) views of multiple tables
to be consistent in the target MVs.

For example, if you have SALES_ORDER and LINE_ITEMS tables and the MVs on these are refreshed at different times, you might see the ORDER (Header) without the LINE_ITEMs (or, worse, in the absence of Referential Integrity constraints, LINE_ITEMs without the ORDER (Header) !).

Here's a demo, using the HR DEPARTMENTS and EMPLOYEES tables with corresponding MVs built in the HEMANT schema.

SQL> show user
USER is "HR"
SQL> select count(*) from departments;

COUNT(*)
----------
27

SQL> select count(*) from employees;

COUNT(*)
----------
107

SQL>
SQL> grant select on departments to hemant;

Grant succeeded.

SQL> grant select on employees to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on departments;

Materialized view log created.

SQL> grant select, delete on mlog$_departments to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on employees;

Materialized view log created.

SQL> grant select, delete on mlog$_employees to hemant;

Grant succeeded.

SQL>
SQL>


Having created the source MV Logs, note that I have to grant privileges to the account (HEMANT) that will be reading and deleting from the MV Logs.

Next, I setup the MVs and the Refresh Group

SQL> show user
USER is "HEMANT"
SQL>
SQL> select count(*) from hr.departments;

COUNT(*)
----------
27

SQL> select count(*) from hr.employees;

COUNT(*)
----------
107

SQL>
SQL>
SQL> create materialized view mv_dept
2 refresh fast on demand
3 as select department_id as dept_id, department_name as dept_name
4 from hr.departments
5 /

Materialized view created.

SQL>
SQL> create materialized view mv_emp
2 refresh fast on demand
3 as select department_id as dept_id, employee_id as emp_id,
4 first_name, last_name, hire_date
5 from hr.employees
6 /

Materialized view created.

SQL>
SQL> select count(*) from mv_dept;

COUNT(*)
----------
27

SQL> select count(*) from mv_emp;

COUNT(*)
----------
107

SQL>
SQL> execute dbms_refresh.make(-
> name=>'HR_MVs',-
> list=>'MV_DEPT,MV_EMP',-
> next_date=>sysdate+0.5,-
> interval=>'sysdate+1');

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>


Here, I have built two MVs and then a Refresh Group called "HR_MVS".  The first refresh will be 12 hours from now and every subsequent refresh will be 24 hours after that.  (The Refresh Interval must be set to something larger than the time taken to execute the actual Refresh).
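The group definition, schedule and members can be confirmed from the data dictionary (a quick check, output not shown here):

select rname, next_date, interval from user_refresh;

select rname, name, type from user_refresh_children order by name;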

However, I can manually execute the Refresh after new rows are populated into the source tables. First, I insert new rows

SQL> show user
USER is "HR"
SQL> insert into departments (department_id, department_name)
2 values
3 (departments_seq.nextval, 'New Department');

1 row created.

SQL> select department_id
2 from departments
3 where department_name = 'New Department';

DEPARTMENT_ID
-------------
280

SQL> insert into employees(employee_id, first_name, last_name, email, hire_date, job_id, department_id)
2 values
3 (employees_seq.nextval, 'Hemant', 'Chitale', 'hkc@myenterprise.com', sysdate, 'AD_VP', 280);

1 row created.

SQL> select employee_id
2 from employees
3 where first_name = 'Hemant';

EMPLOYEE_ID
-----------
208

SQL> commit;

Commit complete.

SQL>


Now that there are new rows, the target MVs must be refreshed together.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> execute dbms_refresh.refresh('HR_MVS');

PL/SQL procedure successfully completed.

SQL> select count(*) from mv_dept;

COUNT(*)
----------
28

SQL> select count(*) from mv_emp;

COUNT(*)
----------
108

SQL>
SQL> select * from mv_dept
2 where dept_id=280;

DEPT_ID DEPT_NAME
---------- ------------------------------
280 New Department

SQL> select * from mv_emp
2 where emp_id=208;

DEPT_ID EMP_ID FIRST_NAME LAST_NAME HIRE_DATE
---------- ---------- -------------------- ------------------------- ---------
280 208 Hemant Chitale 12-OCT-19

SQL>


Both MVs have been Refresh'd together as an ATOMIC Transaction.  If either of the two MVs had failed to refresh (e.g. unable to allocate extent to grow the MV), both the INSERTs would be rolled back.  (Note : It is not a necessary requirement that both source tables have new / updated rows, the Refresh Group works even if only one of the two tables has new / updated rows).

Note : I have used DBMS_REFRESH.REFRESH (instead of DBMS_MVIEW.REFRESH) to execute the Refresh.
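For comparison (a sketch, not executed in this demo) : DBMS_MVIEW.REFRESH can also refresh the same list of MVs in a single transaction when ATOMIC_REFRESH is TRUE, but it has no stored group definition or schedule, which is what DBMS_REFRESH provides.

execute dbms_mview.refresh(list=>'MV_DEPT,MV_EMP', method=>'FF', atomic_refresh=>TRUE);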

You can build multiple Refresh Groups, each consisting of *multiple* Source Tables from the same source database.
You would define each Refresh Group to maintain consistency of data across multiple MVs (sourced from different tables).
Besides the Refresh Group on two HR tables, I could have, within the HEMANT schema, more Refresh Groups on FINANCE schema tables as well.
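For example, a second group could be defined the same way (a sketch only; the FINANCE MVs MV_SALES_ORDERS and MV_LINE_ITEMS are hypothetical names) :

execute dbms_refresh.make(-
  name=>'FINANCE_MVS',-
  list=>'MV_SALES_ORDERS,MV_LINE_ITEMS',-
  next_date=>sysdate+0.5,-
  interval=>'sysdate+1');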

(Can you have a Refresh Group sourcing from tables from different schemas ?  Try that out !)


What's the downside of Refresh Groups ?    
Undo and Redo !  Every Refresh consists of INSERT/UPDATE/DELETE operations on the MVs.  And if any one of the MVs fails to Refresh, the entire set of DMLs (across all the MVs in the Refresh Group) has to *Rollback* !


Categories: DBA Blogs

Oracle cloud: Login

Dietrich Schroff - Fri, 2019-10-11 15:01
The main problem with logging into the Oracle cloud is that there is no generic login URL.
Inside the mail Oracle sent after the registration there is a specific URL. Something like:
http://app.response.oracle-mail.com/e/er?elq_mid=920.......
But this ends after some seconds at:

https://cloud.oracle.com/en_US/sign-in

There is also another login page, but this one does not work for my setup:

https://login.eu-frankfurt-1.oraclecloud.com/v1/oauth2/authorize


For this one I did not find any documentation at all, so if somebody knows how this login page can be used, please add a comment...

Patroni Operations – switchover and failover

Yann Neuhaus - Fri, 2019-10-11 09:22

In this post we will have a look at switchover and failover of a Patroni cluster, as well as at the maintenance mode Patroni offers, which gives you the opportunity to prevent an automatic failover.

Switchover

There are two possibilities to run a switchover, either in scheduled mode or immediately.

1. Scheduled Switchover
postgres@patroni1:/home/postgres/ [PG1] patronictl switchover
Master [patroni1]:
Candidate ['patroni2', 'patroni3'] []: patroni2
When should the switchover take place (e.g. 2019-10-08T11:31 )  [now]: 2019-10-08T10:32
Current cluster topology
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  2 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  2 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  2 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
Are you sure you want to schedule switchover of cluster PG1 at 2019-10-08T10:32:00+02:00, demoting current master patroni1? [y/N]: y
2019-10-08 10:31:14.89236 Switchover scheduled
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  2 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  2 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  2 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
 Switchover scheduled at: 2019-10-08T10:32:00+02:00
                    from: patroni1
                      to: patroni2
postgres@patroni1:/home/postgres/ [PG1]

That’s it. At the given time, the switchover will take place. All you see in the logfile is an entry like this

Oct  8 10:32:00 patroni1 patroni: 2019-10-08 10:32:00,006 INFO: Manual scheduled failover at 2019-10-08T10:32:00+02:00
Oct  8 10:32:00 patroni1 patroni: 2019-10-08 10:32:00,016 INFO: Got response from patroni2 http://192.168.22.112:8008/patroni: {"database_system_identifier": "6745341072751547355", "postmaster_start_time": "2019-10-08 10:09:40.217 CEST", "timeline": 2, "cluster_unlocked": false, "patroni": {"scope": "PG1", "version": "1.6.0"}, "state": "running", "role": "replica", "xlog": {"received_location": 83886560, "replayed_timestamp": null, "paused": false, "replayed_location": 83886560}, "server_version": 110005}
Oct  8 10:32:00 patroni1 patroni: 2019-10-08 10:32:00,113 INFO: manual failover: demoting myself
Oct  8 10:32:01 patroni1 patroni: 2019-10-08 10:32:01,256 INFO: Leader key released
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03,271 INFO: Local timeline=2 lsn=0/6000028
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03,279 INFO: master_timeline=3
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03,281 INFO: master: history=1#0110/5000098#011no recovery target specified
Oct  8 10:32:03 patroni1 patroni: 2#0110/6000098#011no recovery target specified
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03,282 INFO: closed patroni connection to the postgresql cluster
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03,312 INFO: postmaster pid=11537
Oct  8 10:32:03 patroni1 patroni: 192.168.22.111:5432 - no response
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03.325 CEST - 1 - 11537 -  - @ - 0LOG:  listening on IPv4 address "192.168.22.111", port 5432
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03.328 CEST - 2 - 11537 -  - @ - 0LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03.339 CEST - 3 - 11537 -  - @ - 0LOG:  redirecting log output to logging collector process
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03.339 CEST - 4 - 11537 -  - @ - 0HINT:  Future log output will appear in directory "pg_log".
Oct  8 10:32:04 patroni1 patroni: 192.168.22.111:5432 - accepting connections
Oct  8 10:32:04 patroni1 patroni: 192.168.22.111:5432 - accepting connections
Oct  8 10:32:04 patroni1 patroni: 2019-10-08 10:32:04,895 INFO: Lock owner: patroni2; I am patroni1
Oct  8 10:32:04 patroni1 patroni: 2019-10-08 10:32:04,895 INFO: does not have lock
Oct  8 10:32:04 patroni1 patroni: 2019-10-08 10:32:04,896 INFO: establishing a new patroni connection to the postgres cluster
2. Immediate switchover

Here you start the same way as for a planned switchover, but the switchover will take place immediately.

postgres@patroni1:/home/postgres/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 |        | running |  1 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 | Leader | running |  1 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  1 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
postgres@patroni1:/home/postgres/ [PG1] patronictl switchover
Master [patroni2]:
Candidate ['patroni1', 'patroni3'] []: patroni1
When should the switchover take place (e.g. 2019-10-08T11:09 )  [now]:
Current cluster topology
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 |        | running |  1 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 | Leader | running |  1 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  1 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
Are you sure you want to switchover cluster PG1, demoting current master patroni2? [y/N]: y
2019-10-08 10:09:38.88046 Successfully switched over to "patroni1"
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  1 |           |
|   PG1   | patroni2 | 192.168.22.112 |        | stopped |    |   unknown |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  1 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
postgres@patroni1:/home/postgres/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  2 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  2 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  2 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
postgres@patroni1:/home/postgres/ [PG1]
Failover

In contrast to the switchover, the failover is executed automatically when the Leader node becomes unavailable for an unplanned reason.
You can only adjust some parameters to influence the failover behaviour.

The failover parameters are also managed using patronictl, but they are not in the postgresql parameters section, they sit above it in the configuration. So let's say we adjust one parameter and add another one so that it no longer uses its default.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config
---
+++
@@ -1,5 +1,6 @@
-loop_wait: 7
+loop_wait: 10
 maximum_lag_on_failover: 1048576
+master_start_timeout: 240
 postgresql:
   parameters:
     archive_command: /bin/true

Apply these changes? [y/N]: y
Configuration changed

Afterwards there is no need to restart the database; the changes take effect immediately. So the failover can be configured according to your specific needs. A list of all possible parameters can be found here.
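To double-check that the change reached the distributed configuration without any restart, the current dynamic configuration can be printed (output omitted here):

postgres@patroni1:/home/postgres/ [PG1] patronictl show-config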

Maintenance mode

In some cases it is necessary to do maintenance on a single node and you do not want Patroni to manage the cluster. This can be needed, for example, for release updates.
When Patroni is paused, it won't change the state of PostgreSQL. For example, it will not try to start the cluster when it is stopped.

So let's do an example. We will pause the cluster, stop the replica, upgrade from 9.6.8 to 9.6.13 and afterwards start the replica again. If we did not pause the cluster first, the stopped replica would be restarted automatically by Patroni.

postgres@patroni1:/home/postgres/ [PG1] patronictl pause
Success: cluster management is paused
You have new mail in /var/spool/mail/opendb
postgres@patroni1:/home/postgres/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  2 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  2 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  2 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
 Maintenance mode: on

On the replica

postgres@patroni2:/home/postgres/ [PG1] pg_ctl stop -D /u02/pgdata/96/PG1/ -m fast

postgres@patroni2:/home/postgres/ [PG1] export PATH=/u01/app/postgres/product/PG96/db_13/bin:$PATH
postgres@patroni2:/home/postgres/ [PG1] export PORT=5432
postgres@patroni2:/home/postgres/ [PG1] which pg_ctl
/u01/app/opendb/product/PG96/db_13/bin/pg_ctl

postgres@patroni2:/home/postgres/ [PG1] pg_ctl -D /u02/pgdata/96/PG1 start
server starting
postgres@patroni2:/home/postgres/ [PG1] 2019-10-08 17:25:28.358 CEST - 1 - 23192 -  - @ - 0LOG:  redirecting log output to logging collector process
2019-10-08 17:25:28.358 CEST - 2 - 23192 -  - @ - 0HINT:  Future log output will appear in directory "pg_log".

postgres@patroni2:/home/postgres/ [PG1] psql -c "select version()" postgres
                                                           version
------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.13 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit
(1 row)

postgres@patroni2:/home/postgres/ [PG1] patronictl resume
Success: cluster management is resumed

postgres@patroni2:/home/postgres/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  5 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  5 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  5 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+

You can do this on the other nodes as well.

Conclusion

Switchover is quite easy, and in all the tests I did so far it was really reliable, as was the failover; here you just have to think about adjusting the parameters to your needs. Waiting 5 minutes for a failover is not the best solution in every case.

Cet article Patroni Operations – switchover and failover est apparu en premier sur Blog dbi services.

v$session

Jonathan Lewis - Fri, 2019-10-11 06:29

Here’s an odd, and unpleasant, detail about querying v$session in the “most obvious” way. (And if you were wondering what made me resurrect and complete a draft on “my session id” a couple of days ago, this posting is the reason). Specifically if you want to select some information for your own session from v$session the query you’re likely to use in any recent version of Oracle will probably be of the form:


select {list for columns} from v$session where sid = to_number(sys_context('userenv','sid'));

Unfortunately that one little statement hides two anomalies – which you can see in the execution plan. Here’s a demonstration cut from an SQL*Plus session running under 19.3.0.0:


SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID  gcfrzq9knynj3, child number 0
-------------------------------------
select program from V$session where sid = sys_context('userenv','sid')

Plan hash value: 2422122865

----------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |       |       |     1 (100)|
|   1 |  MERGE JOIN CARTESIAN     |                 |     1 |    33 |     0   (0)|
|   2 |   NESTED LOOPS            |                 |     1 |    12 |     0   (0)|
|*  3 |    FIXED TABLE FULL       | X$KSLWT         |     1 |     8 |     0   (0)|
|*  4 |    FIXED TABLE FIXED INDEX| X$KSLED (ind:2) |     1 |     4 |     0   (0)|
|   5 |   BUFFER SORT             |                 |     1 |    21 |     0   (0)|
|*  6 |    FIXED TABLE FULL       | X$KSUSE         |     1 |    21 |     0   (0)|
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("W"."KSLWTSID"=TO_NUMBER(SYS_CONTEXT('userenv','sid')))
   4 - filter("W"."KSLWTEVT"="E"."INDX")
   6 - filter((BITAND("S"."KSUSEFLG",1)<>0 AND BITAND("S"."KSSPAFLG",1)<>0 AND 
              "S"."INDX"=TO_NUMBER(SYS_CONTEXT('userenv','sid'))
              AND INTERNAL_FUNCTION("S"."CON_ID") AND "S"."INST_ID"=USERENV('INSTANCE')))

As you can see, v$session is a join of 3 separate structures – x$kslwt (v$session_wait), x$ksled (v$event_name), and x$ksuse (the original v$session as it was some time around 8i), and the plan shows two “full tablescans” and a Cartesian merge join. Tablescans and Cartesian merge joins are not necessarily bad – especially where small tables and tiny numbers of rows are concerned – but they do merit at least a brief glance.

x$ksuse is a C structure in the fixed SGA and that structure is a segmented array (which seems to be chunks of 126 entries in 19.3, and chunks of 209 entries in 12.2 – but that’s fairly irrelevant). The SID is simply the index into the array counting from 1, so if you have a query with a predicate like ‘SID = 99’ Oracle can work out the address of the 99th entry in the array and access it very quickly – which is why the SID column is reported as a “fixed index” column in the view v$indexed_fixed_column.
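A quick way to see which fixed index columns are available for these structures (a simple query against the view just mentioned):

select  table_name, index_number, column_name
from    v$indexed_fixed_column
where   table_name in ('X$KSUSE', 'X$KSLWT')
order by
        table_name, index_number, column_position
;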

But we have two problems immediately visible:

  1. the optimizer is not using the “index” to access x$ksuse despite the fact that we’re giving it exactly the value we want to use (and we can see a suitable predicate at operation 6 in the plan)
  2. the optimizer has decided to start executing the query at the x$kslwt table

Before looking at why things have gone wrong, let's check the execution plan to see what would have happened if we'd copied the value from the sys_context() call into a bind variable and queried using the bind variable – which we'll keep as a character type to make it a fair comparison:

SQL_ID  cm3ub1tctpdyt, child number 0
-------------------------------------
select program from v$session where sid = to_number(:v1)

Plan hash value: 1627146547

----------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |       |       |     1 (100)|
|   1 |  MERGE JOIN CARTESIAN     |                 |     1 |    32 |     0   (0)|
|   2 |   NESTED LOOPS            |                 |     1 |    12 |     0   (0)|
|*  3 |    FIXED TABLE FIXED INDEX| X$KSLWT (ind:1) |     1 |     8 |     0   (0)|
|*  4 |    FIXED TABLE FIXED INDEX| X$KSLED (ind:2) |     1 |     4 |     0   (0)|
|   5 |   BUFFER SORT             |                 |     1 |    20 |     0   (0)|
|*  6 |    FIXED TABLE FIXED INDEX| X$KSUSE (ind:1) |     1 |    20 |     0   (0)|
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("W"."KSLWTSID"=TO_NUMBER(:V1))
   4 - filter("W"."KSLWTEVT"="E"."INDX")
   6 - filter(("S"."INDX"=TO_NUMBER(:V1) AND BITAND("S"."KSSPAFLG",1)<>0
              AND BITAND("S"."KSUSEFLG",1)<>0 AND INTERNAL_FUNCTION("S"."CON_ID") AND
              "S"."INST_ID"=USERENV('INSTANCE')))


When we have a (character) bind variable instead of a sys_context() value the optimizer manages to use the "fixed indexes", but it still starts executing at x$kslwt, and still does a Cartesian merge join. The plan would be the same if the bind variable were a numeric type, and we'd still get the same plan if we replaced the bind variable with a literal number.

So problem number 1 is that Oracle only seems able to use the fixed index path for literal values and simple bind variables (plus a few "simple" functions). It doesn't seem to use the fixed indexes for most functions (even deterministic ones) returning a value, and the sys_context() function is a particular example of this.
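One workaround suggested by the bind-variable plan above (a sketch, with the variable name v1 chosen arbitrarily) is to capture the sys_context() value into a bind variable first, so the fixed index on x$ksuse can be used:

variable v1 varchar2(10)

execute :v1 := sys_context('userenv','sid')

select program from v$session where sid = to_number(:v1);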

Transitivity

Problem number 2 comes from a side-effect of something that I first described about 13 years ago – transitive closure. Take a look at the predicates in both the execution plans above. Where’s the join condition between x$ksuse and x$kslwt? There should be one, because the underlying SQL defining [g]v$session has the following joins:

from
      x$ksuse s,
      x$ksled e,
      x$kslwt w
where
      bitand(s.ksspaflg,1)!=0
and   bitand(s.ksuseflg,1)!=0
and   s.indx=w.kslwtsid       -- this is the SID column for v$session and v$session_wait
and   w.kslwtevt=e.indx
 

What’s happened here is that the optimizer has used transitive closure: “if a = b and b = c then a = c” to clone the predicate “s.indx = to_number(sys_context(…))” to “w.kslwtsid = to_number(sys_context(…))”. But at the same time the optimizer has eliminated the predicate “s.indx = w.kslwtsid”, which it shouldn’t do because this is 12.2.0.1 and ever since 10g we’ve had the parameter _optimizer_transitivity_retain = true — but SYS is ignoring the parameter setting.
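To see the mechanism in isolation, here’s a hypothetical two-table example (table and column names invented purely for illustration):

-- Given a join plus a single-table predicate on the join column ...
select  t1.v1, t2.v1
from    t1, t2
where   t1.id = t2.id
and     t1.id = 99
;
-- ... transitive closure lets the optimizer generate "t2.id = 99" as well.
-- With _optimizer_transitivity_retain = true the original join predicate
-- "t1.id = t2.id" should also be kept; in the v$session case above it has
-- been discarded, which is what forces the Cartesian merge join.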

So we no longer have a join condition between x$ksuse and x$kslwt – which means there has to be a cartesian merge join between them and the only question is whether this should take place before or after the join between x$kslwt and x$ksled. In fact, the order doesn’t really matter because there will be only one row identified in x$kslwt and one row in x$ksuse, and the join to x$ksled is simply a lookup (by undeclarable unique key) to translate an id into a name and it will take place only once whatever we do about the other two structures.

But there is a catch – especially if your sessions parameter is 25,000 (which it shouldn’t be) and the number of connected sessions is currently 20,000 (which it shouldn’t be) – the predicate against x$ksuse does a huge amount of work as it walks the entire array testing every row (and it doesn’t even do the indx test first – it does a couple of bitand() operations). Even then this wouldn’t be a disaster – we’re only talking a couple of hundredths of a second of CPU – until you find the applications that run this query a huge number of times.
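If you want a feel for how much of x$ksuse such a tablescan has to walk compared with the number of rows you actually care about, a couple of quick checks (illustrative only) are:

-- The size of the x$ksuse array is dictated by the sessions parameter ...
select value from v$parameter where name = 'sessions';

-- ... while v$session reports only the currently connected sessions.
select count(*) from v$session;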

We would prefer to avoid two full tablescans since the arrays could be quite large, and of the two it’s the tablescan of x$ksuse that is going to be the greater threat; so is there a way to bypass the threat?  Once we’ve identified the optimizer anomaly we’ve got a pointer to a solution. Transitivity is going wrong, so let’s attack the transitivity. Checking the hidden parameters we can find a parameter: _optimizer_generate_transitive_pred which defaults to true, so let’s set it to false for the query and check the plan:

select  /*+   opt_param('_optimizer_generate_transitive_pred','FALSE')
*/  program from  v$session where  sid = sys_context('userenv','sid')

Plan hash value: 3425234845

----------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |       |       |     1 (100)|
|   1 |  NESTED LOOPS             |                 |     1 |    32 |     0   (0)|
|   2 |   NESTED LOOPS            |                 |     1 |    28 |     0   (0)|
|   3 |    FIXED TABLE FULL       | X$KSLWT         |    47 |   376 |     0   (0)|
|*  4 |    FIXED TABLE FIXED INDEX| X$KSUSE (ind:1) |     1 |    20 |     0   (0)|
|*  5 |   FIXED TABLE FIXED INDEX | X$KSLED (ind:2) |     1 |     4 |     0   (0)|
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(("S"."INDX"="W"."KSLWTSID" AND BITAND("S"."KSSPAFLG",1)<>0
              AND BITAND("S"."KSUSEFLG",1)<>0 AND "S"."INDX"=TO_NUMBER(SYS_CONTEXT('user
              env','sid')) AND INTERNAL_FUNCTION("S"."CON_ID") AND
              "S"."INST_ID"=USERENV('INSTANCE')))
   5 - filter("W"."KSLWTEVT"="E"."INDX")


Although it’s not nice to insert hidden parameters into the optimizer activity we do have a result. We don’t have any filtering on x$kslwt – fortunately this seems to be limited in size (but see footnote) to the number of current sessions (unlike x$ksuse which has an array size defined by the sessions parameter or derived from the processes parameter). For each row in x$kslwt we do an access into x$ksuse using the “index” (note that we don’t see access predicates for the fixed tables, we just have to note the operation says FIXED INDEX and spot the “index-related” predicate in the filter predicate list), so this strategy has reduced the number of times we check the complex predicate on x$ksuse rows.

It’s still far from ideal, though. What we’d really like to do is access x$kslwt by index using the known value from sys_context(‘userenv’,’sid’). As it stands, the path we get by using a hidden parameter (one that isn’t listed as legal for the opt_param() hint) is the same plan we would get from an unhinted query that searched for audsid = sys_context(‘userenv’,’sessionid’).

SQL_ID  7f3f9b9f32u7z, child number 0
-------------------------------------
select  program from  v$session where  audsid =
sys_context('userenv','sessionid')

Plan hash value: 3425234845

----------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |       |       |     1 (100)|
|   1 |  NESTED LOOPS             |                 |     2 |    70 |     0   (0)|
|   2 |   NESTED LOOPS            |                 |     2 |    62 |     0   (0)|
|   3 |    FIXED TABLE FULL       | X$KSLWT         |    47 |   376 |     0   (0)|
|*  4 |    FIXED TABLE FIXED INDEX| X$KSUSE (ind:1) |     1 |    23 |     0   (0)|
|*  5 |   FIXED TABLE FIXED INDEX | X$KSLED (ind:2) |     1 |     4 |     0   (0)|
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter(("S"."INDX"="W"."KSLWTSID" AND BITAND("S"."KSSPAFLG",1)<>0
              AND BITAND("S"."KSUSEFLG",1)<>0 AND "S"."KSUUDSES"=TO_NUMBER(SYS_CONTEXT('
              userenv','sessionid')) AND INTERNAL_FUNCTION("S"."CON_ID") AND
              "S"."INST_ID"=USERENV('INSTANCE')))
   5 - filter("W"."KSLWTEVT"="E"."INDX")


The bottom line, then, seems to be that if you need a query by SID against v$session to be as efficient as possible then your best bet is to load a numeric variable with the value of sys_context(‘userenv’,’sid’) and then select where “sid = :bindvariable”. Otherwise query by audsid, or use a hidden parameter to affect the optimizer.
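As a sketch of that recommendation (variable names invented – this is just the shape of the code), the numeric-variable version might look like this in PL/SQL, where local variables are passed to the SQL engine as bind variables:

declare
        m_sid           number := sys_context('userenv','sid');
        m_program       v$session.program%type;
begin
        select  program
        into    m_program
        from    v$session
        where   sid = m_sid;            -- simple bind, so the fixed index can be used
        dbms_output.put_line(m_program);
end;
/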

Until the SYS schema follows the _optimizer_transitivity_retain parameter or treats sys_context() the same way it treats a bind variable, there is always going to be some unnecessary work when querying v$session and that excess will grow with either the number of connected sessions (if you optimize the query) or with the value of the sessions parameter.

Footnote

In (much) older versions of Oracle v$session_wait sat on top of x$ksusecst, which was part of the same C structure as x$ksuse. In newer versions of Oracle x$kslwt is a structure that is created on demand in the PGA/UGA – I hope that there’s a short cut that allows Oracle to find the waiting elements in x$ksuse[cst] efficiently, rather than requiring a walk through the whole thing, otherwise a tablescan of the (nominally smaller) x$kslwt structure will be at least as expensive as a tablescan of the x$ksuse structure.

Update (just a few minutes after posting)

Bob Bryla has pointed out in a tweet that there are many “bugs” not fixed until 19.1 for which the workaround is to set “_optimizer_transitivity_retain” to false. So maybe this isn’t an example of SYS doing something particularly strange – it may be part of a general reworking of the mechanism that still has a couple of undesirable side effects.

Bob’s comment prompted me to clone the x$ tables into real tables in a non-SYS schema and model the fixed indexes with primary keys, and I found that the resulting plan (though very efficient) still discarded the join predicate. So we may be seeing the side effects of a code enhancement relating to generating predicates that produce unique key access paths. (A “contrary” test, the one in the 2013 article I linked to, still retains the join predicate for the query that has non-unique indexes.)

 

Show null for switch items

Jeff Kemp - Fri, 2019-10-11 02:51

An application I maintain needed a checklist feature added. I wanted to show a “Yes / No” switch for a list of checklist items. Initially, when the record is created, the checklist is populated with the questions along with a NULL for the response.

I generated the switches in an ordinary Classic report using code like this:

select r.name as risk_category
      ,apex_item.switch
         (p_idx        => 10
         ,p_value      => i.response
         ,p_on_value   => 'Yes'
         ,p_on_label   => 'Yes'
         ,p_off_value  => 'No'
         ,p_off_label  => 'No'
         ,p_item_id    => 'RESPONSE_' || rownum
         ,p_item_label => i.risk_category_code || '-' || i.rci_fk
         ,p_attributes => 'data-risk="' || i.risk_category_code || '"'
         )
       ||apex_item.hidden(p_idx => 11, p_value => i.rci_fk)
       as response
      ,i.question_text
from supplier_risk_checklist_items i
join risk_categories r on r.code = i.risk_category_code
where i.sri_fk = :P10_ID
order by r.sort_order nulls last, i.sort_order nulls last, i.rci_fk

I’ve used p_idx values of 10 and 11 in order to avoid conflicting with another tabular report on this particular page. The “response” column in the report has CSS Classes set to responseSwitch (this becomes useful later when we want to write javascript targeting just these items and nothing else on the page) and its Escape special characters attribute is set to No. The report when run looks like this:

Some of the responses are “Yes”, some “No”, and some are NULL (unanswered).

The problem is that all the NULL responses are indistinguishable from the “No” responses. If the user clicks “Yes” or “No”, the response is saved correctly – but the user cannot tell which ones haven’t explicitly been answered yet.

To find a solution for this issue I started by examining the HTML being generated for each question. I noticed that the input option for the “No” value was marked as “checked”, while the hidden input item had no “value” on it. These were the ones that needed fixing.

Example 1: the displayed radio button RESPONSE_10_N is “checked”, but the associated hidden input RESPONSE_10 has no value attribute – this is the case that needs fixing. Example 2: the displayed radio button RESPONSE_5_N is also “checked”, but that’s OK because the hidden input RESPONSE_5 has the value “No” – so we don’t want to change this one.

In the page’s Execute When Page Loads, I search for all instances of responseSwitch where the hidden input item does not have a value attribute; in each case, I find the associated input item that shows “No” and unset the “checked” property:

// workaround for generated switch items showing "No" when value is null
// search for the hidden input items without a value (i.e. null on the database)
$(".responseSwitch input[name='f10']:not([value])").each(function(i){
    var id = $(this).attr("id");
    // these will have "checked" on the "No" option; remove it
    $(".responseSwitch input#"+id+"_N").prop("checked",null);
});

This makes it clear to the user which checklist items have been answered so far, and which ones haven’t.

Note: the user is given no way to unset an answer once it has been saved; if this were a problem I would change this to use an ordinary Select list item instead of a Switch item.
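For reference, one way that alternative might look in the same report query – a hedged sketch using apex_item.select_list with an explicit null option (parameter names as per the APEX_ITEM API; the null text is just an example):

select r.name as risk_category
      ,apex_item.select_list
         (p_idx         => 10
         ,p_value       => i.response
         ,p_list_values => 'Yes;Yes,No;No'   -- display;return pairs
         ,p_show_null   => 'YES'
         ,p_null_value  => ''
         ,p_null_text   => '- Not answered -'
         ,p_item_id     => 'RESPONSE_' || rownum
         ,p_item_label  => i.risk_category_code || '-' || i.rci_fk
         ,p_attributes  => 'data-risk="' || i.risk_category_code || '"'
         )
       ||apex_item.hidden(p_idx => 11, p_value => i.rci_fk)
       as response
      ,i.question_text
from supplier_risk_checklist_items i
join risk_categories r on r.code = i.risk_category_code
where i.sri_fk = :P10_ID
order by r.sort_order nulls last, i.sort_order nulls last, i.rci_fk

Because the null option is rendered explicitly, an unanswered item shows as “- Not answered -” and a saved answer can also be cleared back to null if required.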
