Feed aggregator

Create materialized view with ENABLE QUERY REWRITE for SDO_RELATE getting ORA-30373

Tom Kyte - Fri, 2019-11-08 02:48
I write a query like this: <code>CREATE MATERIALIZED VIEW MV REFRESH FORCE ON DEMAND start with (sysdate) next (sysdate+1/1440) ENABLE QUERY REWRITE AS SELECT O.ID DIREK_ID, MAX(LK.ADI) ISLETME_GERILIMI FROM xxx O, yyy AA, SY...
Categories: DBA Blogs

Copy row value down until next non-null row

Tom Kyte - Fri, 2019-11-08 02:48
Hi, Is there a simple way to achieve the following output in SQL? Wanted to populate the null values with the Country name until there is a new country name. Thanks with data as ( select 'Afghanistan Afghani' as country_currency, N...
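The excerpt above is truncated; one common approach to this kind of gap-filling (not necessarily the answer given in the full thread) is the IGNORE NULLS option of LAST_VALUE, sketched here against a small made-up data set:

-- carry the last non-null country value down to the following rows
with data (seq, country, currency) as (
        select 1, 'Afghanistan', 'Afghani'  from dual union all
        select 2, null,          'Pul'      from dual union all
        select 3, 'Albania',     'Lek'      from dual union all
        select 4, null,          'Qindarke' from dual
)
select  seq,
        last_value(country ignore nulls) over (
                order by seq
                rows between unbounded preceding and current row
        ) country,
        currency
from    data
order by seq;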
Categories: DBA Blogs

Problem with very large varchar2 sorting in oracle 10g r2? Sorry, I can't verify my problem in LiveSQL because I need to run it in Oracle 10g r2

Tom Kyte - Fri, 2019-11-08 02:48
I am very sorry to waste your time with something that is probably VERY SIMPLE, but IT IS something I cannot understand ... I must be obfuscated <code>select level, lpad(to_char(level), 1000) from dual connect by level<=20 order by 2;</code> ...
Categories: DBA Blogs

Change column to row

Tom Kyte - Fri, 2019-11-08 02:48
I have a data like below in a table <code>create table test ( sr varchar2(1),col1 number,col2 number,col3 number ) ; insert into test values ('a',1,2,3); insert into test values ('b',4,5,6); insert into test values ('c',7,8,9);</code> Want...
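The excerpt is cut off before the desired output, but assuming the goal is to turn the three columns into rows, one possible approach (not necessarily the one from the full thread) is UNPIVOT against the test table created above:

-- one output row per (sr, column) pair
select  sr, col_name, col_value
from    test
unpivot (col_value for col_name in (col1 as 'COL1', col2 as 'COL2', col3 as 'COL3'))
order by sr, col_name;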
Categories: DBA Blogs

Connection pooling with PgBouncer

Yann Neuhaus - Thu, 2019-11-07 09:38

Some of you may know the case: as soon as the number of users grows, the number of resource problems increases. Have you ever thought about using a connection pooler? Too complex, too much administration effort? In this post I would like to explain how a connection pooler can help you save memory, and show you how simple it is to set up connection pooling with PgBouncer.

Introduction

By default PostgreSQL forks its main process into child processes for every new connection. Over time this can lead to more and more processes on the server. On the one hand, this is pretty cool, because it can provide more stability and a nice view of resource utilization per connection. But if there are many short-lived connections, the disadvantages will predominate: the more connections you have, the more RAM will be used.
The solution for that problem can be a connection pooler like PgBouncer, an open-source connection pooling middleware especially designed for PostgreSQL. It acts as a wrapper around the database connections: it handles the connections between the database and the pool, while everything is hidden from the application that connects.
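If you want to see how many backend processes an instance is currently holding, a quick query like the following can help (a minimal sketch; it assumes PostgreSQL 10 or later, where pg_stat_activity exposes the backend_type column):

-- one row per backend type; the client backends are the per-connection
-- processes that a pooler helps to keep under control
select backend_type, count(*)
from pg_stat_activity
group by backend_type
order by count(*) desc;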

Installation

For the installation of PgBouncer you can choose between two options:
1. using yum
2. building from git (https://pgbouncer.github.io/install.html#building-from-git)

To keep it simple, we go on with option 1.

postgres@centos-mini:/u02/pgdata/11/PG1/ [PG1] sudo yum install pgbouncer
Loaded plugins: fastestmirror
Determining fastest mirrors
epel/x86_64/metalink                                                                |  28 kB  00:00:00
 * base: pkg.adfinis-sygroup.ch
 * epel: pkg.adfinis-sygroup.ch
 * extras: pkg.adfinis-sygroup.ch
 * updates: pkg.adfinis-sygroup.ch
base                                                                                | 3.6 kB  00:00:00
epel                                                                                | 5.3 kB  00:00:00
extras                                                                              | 2.9 kB  00:00:00
ius                                                                                 | 1.3 kB  00:00:00
pgdg10                                                                              | 3.6 kB  00:00:00
pgdg11                                                                              | 3.6 kB  00:00:00
updates                                                                             | 2.9 kB  00:00:00
(1/10): base/7/x86_64/group_gz                                                      | 165 kB  00:00:06
(2/10): extras/7/x86_64/primary_db                                                  | 153 kB  00:00:00
(3/10): epel/x86_64/group_gz                                                        |  90 kB  00:00:06
(4/10): epel/x86_64/primary_db                                                      | 6.9 MB  00:00:08
(5/10): epel/x86_64/updateinfo                                                      | 1.0 MB  00:00:08
(6/10): pgdg11/7/x86_64/primary_db                                                  | 337 kB  00:00:01
(8/10): base/7/x86_64/primary_db                                                    | 6.0 MB  00:00:10
(10/10): updates/7/x86_64/primary_db                                                | 2.8 MB  00:00:01
(11/10): ius/x86_64/primary                                                         | 139 kB  00:00:06
(12/10): pgdg10/7/x86_64/primary_db                                                 | 384 kB  00:00:06
ius                                                                                              631/631
Resolving Dependencies
--> Running transaction check
---> Package pgbouncer.x86_64 0:1.12.0-1.rhel7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================================================================================================================
 Package                           Arch                                               Version                                                     Repository                                          Size
===========================================================================================================================================================================================================
Installing:
 pgbouncer                         x86_64                                             1.12.0-1.rhel7                                              pgdg10                                             207 k

Transaction Summary
===========================================================================================================================================================================================================
Install  1 Package

Total download size: 207 k
Installed size: 477 k
Is this ok [y/d/N]: y
Downloading packages:
pgbouncer-1.12.0-1.rhel7.x86_64.rpm                                                                                                                                                | 207 kB  00:00:06
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : pgbouncer-1.12.0-1.rhel7.x86_64                                                                                                                                                     1/1
  Verifying  : pgbouncer-1.12.0-1.rhel7.x86_64                                                                                                                                                     1/1

Installed:
  pgbouncer.x86_64 0:1.12.0-1.rhel7

Complete!
Configuration pgbouncer.ini

PgBouncer has one central configuration file called pgbouncer.ini; by default it is located under /etc/pgbouncer and it is used to configure the PgBouncer pool.
You can define a lot of parameters in this file, most of them commented out by default, and you can start with a very minimal configuration. It includes generic settings such as logfile, listen_addr, listen_port… as well as connectivity, console access control, admin_users and many, many more. The full list of parameters can be found in the PgBouncer documentation, which is really detailed and gives you a good overview.
Let's have a look at a simple sample pgbouncer.ini file:

cat /etc/pgbouncer/pgbouncer.ini
[databases]
bouncer= host=localhost dbname=bouncer

[pgbouncer]
listen_addr=127.0.0.1
listen_port= 6432
auth_type = md5
auth_file= /etc/pgbouncer/userlist.txt
admin_users=bounce
pool_mode=session
max_client_conn = 100
default_pool_size = 20
logfile = /etc/pgbouncer/log/pgbouncer.log
pidfile = /etc/pgbouncer/pgbouncer.pid

The ini file has two sections. First, the [databases] section defines the alias(es) for the database(s): just start a new line for every database, and you can also define user, port, etc. After that comes the [pgbouncer] section, where the pool configuration is done. Here, too, you can start with a very minimal configuration.

One of the most important parameters is pool_mode, which defines how a server connection can be reused. Three modes can be defined (a minimal per-database override is sketched right after the list):
session: This is the default value. The connection is released back to the pool after the client disconnects.
transaction: The connection is released back to the pool once a transaction finishes.
statement: The connection is released back to the pool after a query finishes.
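If needed, the pool mode can also be overridden for a single database alias in the [databases] section. A minimal sketch (the alias and connection details below are just examples, not part of the setup shown above):

[databases]
; this alias uses transaction pooling, whatever the global pool_mode says
reporting = host=localhost dbname=reporting pool_mode=transaction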

The other parameters in the [pgbouncer] section, briefly explained:
listen_addr: List of addresses where to listen for TCP connections.
listen_port: Listening port.
admin_users: Users from the auth_file which get access to a special pgbouncer database. This database provides performance-related information about PgBouncer.
max_client_conn: The maximum number of client connections allowed. The default value is 100; the PgBouncer documentation also provides a formula (based on pool sizes, databases and users) for estimating the file descriptors needed when raising it.
default_pool_size: The number of server connections allowed per user/database pair. The default value is 20.
logfile: This one is self-explanatory. The log file location.
pidfile: The location of the PID file.
auth_type and auth_file: These two belong together. auth_type specifies how to authenticate users (pam|hba|md5) against PgBouncer and auth_file contains the usernames and passwords.

userlist.txt

As already mentioned, this file has a very simple structure: username and password. You can either write the password in plain text or use the MD5 hash of the password. Of course, it is not very secure to put the plain-text password in here.

cat /etc/pgbouncer/userlist.txt
"bounce" "md51db1c086e81505132d1834e06d07420e"
"postgres" "md53175bce1d3201d16594cebf9d7eb3f9d"
Start PgBouncer

Now all the configuration is done and PgBouncer can be started. It is possible to start PgBouncer on the command line, where you see the log output directly:

postgres@centos-mini:/etc/pgbouncer/ [PG1] /bin/pgbouncer pgbouncer.ini
2019-11-06 19:40:05.862 CET [13498] LOG kernel file descriptor limit: 1024 (hard: 4096); max_client_conn: 100, max expected fd use: 192
2019-11-06 19:40:05.864 CET [13498] LOG listening on 127.0.0.1:16432
2019-11-06 19:40:05.864 CET [13498] LOG listening on unix:/tmp/.s.PGSQL.16432
2019-11-06 19:40:05.864 CET [13498] LOG process up: PgBouncer 1.12.0, libevent 2.0.21-stable (epoll), adns: libc-2.17, tls: OpenSSL 1.0.2k-fips  26 Jan 2017
2019-11-06 19:41:05.868 CET [13498] LOG stats: 0 xacts/s, 0 queries/s, in 0 B/s, out 0 B/s, xact 0 us, query 0 us, wait 0 us
2019-11-06 19:41:39.325 CET [13498] LOG C-0x18da360: db1/user1@127.0.0.1:58648 login attempt: db=db1 user=user1 tls=no
2019-11-06 19:41:39.326 CET [13498] LOG C-0x18da360: db1/user1@127.0.0.1:58648 closing because: client unexpected eof (age=0s)
2019-11-06 19:41:47.577 CET [13498] LOG C-0x18da360: db1/user1@127.0.0.1:58652 login attempt: db=db1 user=user1 tls=no
2019-11-06 19:41:47.579 CET [13498] LOG S-0x18e0c30: db1/user1@[::1]:5432 new connection to server (from [::1]:37654)
2019-11-06 19:42:05.869 CET [13498] LOG stats: 0 xacts/s, 0 queries/s, in 0 B/s, out 0 B/s, xact 0 us, query 0 us, wait 157 us
2019-11-06 19:43:05.872 CET [13498] LOG stats: 0 xacts/s, 0 queries/s, in 0 B/s, out 2 B/s, xact 1522 us, query 1522 us, wait 0 us

Furthermore, it is possible to create a systemd service so that PgBouncer starts automatically in the background after every reboot:

cat /etc/systemd/system/pgbouncer.service
[Unit]
Description=A lightweight connection pooler for PostgreSQL
After=syslog.target
After=network.target

[Service]
Type=simple

User=postgres
Group=postgres

# Path to the init file
Environment=BOUNCERCONF=/etc/pgbouncer/pgbouncer.ini

PIDFile=/var/run/pgbouncer/pgbouncer.pid

# Where to send early-startup messages from the server
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog

ExecStart=/bin/pgbouncer ${BOUNCERCONF}
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT

# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=300

[Install]
WantedBy=multi-user.target
postgres@centos-mini:/etc/ [PG1] sudo systemctl start pgbouncer
postgres@centos-mini:/etc/ [PG1] sudo systemctl status pgbouncer
● pgbouncer.service - A lightweight connection pooler for PostgreSQL
   Loaded: loaded (/etc/systemd/system/pgbouncer.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-11-07 15:17:09 CET; 4s ago
 Main PID: 17298 (pgbouncer)
   CGroup: /system.slice/pgbouncer.service
           └─17298 /bin/pgbouncer /etc/pgbouncer/pgbouncer.ini

Nov 07 15:17:09 centos-mini systemd[1]: Started A lightweight connection pooler for PostgreSQL.
Nov 07 15:17:09 centos-mini pgbouncer[17298]: 2019-11-07 15:17:09.127 CET [17298] LOG kernel file descriptor limit: 1024 (hard: 4096); max_client_conn: 100, max expected fd use: 172
Nov 07 15:17:09 centos-mini pgbouncer[17298]: 2019-11-07 15:17:09.127 CET [17298] LOG listening on 127.0.0.1:6432
Nov 07 15:17:09 centos-mini pgbouncer[17298]: 2019-11-07 15:17:09.127 CET [17298] LOG listening on unix:/tmp/.s.PGSQL.6432
Nov 07 15:17:09 centos-mini pgbouncer[17298]: 2019-11-07 15:17:09.127 CET [17298] LOG process up: PgBouncer 1.12.0, libevent 2.0.21-stable (epoll), adns: libc-2.17, tls: OpenSSL 1.0.2k-fips  26 Jan 2017
15:17:13 postgres@centos-mini:/etc/ [PG1]

PgBouncer is running now and you can connect to the database using PgBouncer.

postgres@centos-mini:/etc/ [PG1] psql -U bounce -p 6432 -h localhost bouncer
Password for user bounce:
psql (11.4 dbi services build)
Type "help" for help.
bouncer=>
bouncer=>
Monitoring

I already mentioned the admin users before and want to explain them in a little more detail now.
PgBouncer allows users with admin rights to connect to the virtual database “pgbouncer”. You can use this database to see who is connecting, how many active connections are in a pool, and much more. As a good starting point, use “SHOW HELP” as soon as you are connected.

postgres@centos-mini:/etc/ [PG1] psql -U bounce -p 6432 -h localhost pgbouncer
Password for user bounce:
psql (11.4 dbi services build, server 1.12.0/bouncer)
Type "help" for help.
pgbouncer=# SHOW HELP;
NOTICE:  Console usage
DETAIL:
        SHOW HELP|CONFIG|DATABASES|POOLS|CLIENTS|SERVERS|USERS|VERSION
        SHOW FDS|SOCKETS|ACTIVE_SOCKETS|LISTS|MEM
        SHOW DNS_HOSTS|DNS_ZONES
        SHOW STATS|STATS_TOTALS|STATS_AVERAGES|TOTALS
        SET key = arg
        RELOAD
        PAUSE [<db>]
        RESUME [<db>]
        DISABLE <db>
        ENABLE <db>
        RECONNECT [<db>]
        KILL <db>
        SUSPEND
        SHUTDOWN
SHOW
pgbouncer=#
pgbouncer=# SHOW POOLS;
 database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us | pool_mode
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-----------
 bouncer   | bounce    |         2 |          0 |         0 |       0 |       1 |         0 |        0 |       0 |          0 | session
 pgbouncer | pgbouncer |         1 |          0 |         0 |       0 |       0 |         0 |        0 |       0 |          0 | statement
(2 rows)

pgbouncer=# SHOW CLIENTS;
 type |  user  | database  | state  |   addr    | port  | local_addr | local_port |      connect_time       |      request_time       | wait | wait_us | close_needed |    ptr    | link | remote_pid | tls
------+--------+-----------+--------+-----------+-------+------------+------------+-------------------------+-------------------------+------+---------+--------------+-----------+------+------------+-----
 C    | bounce | bouncer   | active | 127.0.0.1 | 40322 | 127.0.0.1  |       6432 | 2019-11-07 15:24:40 CET | 2019-11-07 15:24:40 CET |    0 |       0 |            0 | 0x1bd9598 |      |          0 |
 C    | bounce | bouncer   | active | 127.0.0.1 | 40332 | 127.0.0.1  |       6432 | 2019-11-07 15:25:12 CET | 2019-11-07 15:25:12 CET |    0 |       0 |            0 | 0x1bd97b0 |      |          0 |
 C    | bounce | pgbouncer | active | 127.0.0.1 | 40314 | 127.0.0.1  |       6432 | 2019-11-07 15:21:50 CET | 2019-11-07 15:25:36 CET |  221 |  440169 |            0 | 0x1bd9380 |      |          0 |
(3 rows)
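Besides SHOW POOLS and SHOW CLIENTS, a few of the other console commands listed by SHOW HELP are worth a look (output omitted here):

SHOW STATS;      -- request and traffic statistics per database
SHOW SERVERS;    -- server-side connections PgBouncer currently holds
SHOW DATABASES;  -- configured databases and their pool settings
SHOW CONFIG;     -- the effective configuration parameters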
Conclusion

The above configuration is only a simple example of how the configuration can look. Of course, there are many more specifics you can define. PgBouncer is a great tool for connection pooling and can help you to scale down the memory usage of your server. The connections of a pool are always available and, in contrast to forking processes, there is no need to reach out to the database to establish a connection every time. The connections are just there.

The article Connection pooling with PgBouncer appeared first on Blog dbi services.

Brazilian Luxury Hotel Uses Oracle to Raise the Bar on Guest Experiences

Oracle Press Releases - Thu, 2019-11-07 07:00
Press Release
Brazilian Luxury Hotel Uses Oracle to Raise the Bar on Guest Experiences B Hotel Brasília personalizes guest services and reporting with Oracle Hospitality solutions

Redwood Shores, Calif.—Nov 7, 2019

B Hotel Brasília, an independent luxury hotel located on Brasília’s spectacular Eixo Monumental stretch, has deployed an integrated suite of Oracle Hospitality and Oracle Food and Beverage cloud solutions to modernize its operations across the property and offer guests a stellar stay. 

“Our search for technology partners naturally led us to Oracle because of its industry recognition and the solution’s ability to meet our requirements,” said Ana Paula Ernesto, CEO, B Hotel Brasília. “Transitioning to cloud technology enables our staff to focus less on managing technology and more on curating the luxury experiences our guests demand. With a single view of operations across our hotel and dining facilities we can ensure guests are recognized across touchpoints while providing management with visibility into property performance.”  

Oracle Hospitality OPERA Property provides B Hotel Brasília, which offers 306 rooms and a rooftop bar and pool overlooking Brasília’s skyline, with a single view of every guest touchpoint from reservations and check-in to dining. The technology arms hotel personnel with detailed guest information allowing them to deliver more personalized services. In addition, OPERA enables hotel management to prepare and deliver a variety of reports for management including arrivals, departures, revenue and ratings – all from an easy-to-use dashboard view.

The Oracle Food and Beverage MICROS Simphony Cloud Point of Sale system for restaurants was implemented across the hotel’s dining venues including B Restaurant, Bar 16 and the Lobby Bar, offering wait staff the ability to take orders from mobile devices and deliver faster and more accurate food and beverage service to diners. 

“With our Oracle hospitality and food and beverage cloud solutions, we are uniquely positioned to serve as a strategic partner to both independent hotels and global chains that want to increase guest satisfaction, streamline operations and increase on-property revenues,” said Tanya Pratt, Associate Vice President, Oracle Hospitality. “With B Hotel Brasília’s digital transformation they have the power to create a competitive advantage with stellar service that can increase the property’s recognition globally.”

Watch the B Hotel Brasília video to learn how Oracle Hospitality Solutions are helping to drive the property’s growth.


Learn more about Oracle’s Hospitality Solutions here.

Contact Info
Christine Allen
Oracle
+1.603.743.4534
christine.allen@oracle.com
About Oracle Hospitality

Oracle Hospitality brings over 40 years of experience in providing technology solutions to independent hoteliers, global and regional chains, gaming, and cruise lines. We provide hardware, software, and services that allow our customers to act on rich data insights that deliver personalized guest experiences, maximize profitability and encourage long-term loyalty. Our solutions include platforms for property management, point-of-sale, distribution, reporting and analytics all delivered from the cloud to lower IT cost and maximize business agility. Oracle Hospitality’s OPERA is recognized globally as the leading property management platform and continues to serve as a foundation for industry innovation. 

For more information about Oracle Hospitality, please visit www.oracle.com/Hospitality 

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Christine Allen

  • +1.603.743.4534

Free Oracle Cloud: 15. The request could not be mapped to any database

Dimitri Gielis - Wed, 2019-11-06 13:25
This post is the last post of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

At some point you might face the following message: "The request could not be mapped to any database":


Oracle is monitoring usage on your Always Free Account and whenever it finds there's no activity for 7 days, it will stop your database automatically. It will preserve the data in the database, but it won't be accessible anymore.

To fix the issue, log in to your Oracle Cloud Account and go to your Autonomous Database:


You will see the database is in a stopped state. Click the Start button:


The state will change to Starting...


And after a minute it becomes available again:


The above behavior is described in the end-user documentation:
Inactivity Monitoring and Database Stoppage
Persistently inactive Always Free Autonomous Databases are detected and handled as follows:
  • After being inactive for 7 days, the database will be stopped automatically, preserving its stored data. Inactivity measurements leading up to 7 days are based on database connections. Successfully making a SQL*Net or HTTPS connection resets these measurements to zero.
  • A database that is automatically or manually stopped and stays inactive for 90 days, cumulative, may be reclaimed and permanently deleted. Inactivity measurements leading up to 90 days are based on the database being inactive or in the stopped state. Starting a stopped database resets these measurements to zero.
    Start an Always Free Autonomous Database by clicking the Start button on the Oracle Cloud Infrastructure console. Start a stopped Always Free Autonomous Database before 90 days to avoid losing access to its data.

But this week there were some people complaining that although they had activity, their database was stopped anyway. I witnessed the same behavior in my account, so I reached out to Oracle and they confirmed that their code to identify inactivity is not properly accounting for APEX/ORDS usage. They are already working on a fix, which they hope to apply very soon. I will update this post when I get confirmation that the fix is in the data centers.

Categories: Development

Table Space

Jonathan Lewis - Wed, 2019-11-06 11:59

There’s a request on the Oracle Developer Forum for assistance to write a report that shows the total space allocation for a table – including its indexes and LOB columns. There is no requirement to allow for partitioned tables, but it’s not entirely clear whether the code should allow for index-organized tables and clustered tables, and there’s no comment about nested tables.

The OP has made an attempt to write a suitable query, but the query joins dba_tables to dba_indexes, then to dba_lobs, then three times to dba_segments (all the joins being outer joins) before aggregating on table name. Unsurprisingly this produces the wrong results because (for example) if a table has two indexes the join from dba_tables to dba_indexes will double the bytes reported for the table. As Andrew Sayer points out in the thread, it would be better to aggregate on each of the separate component types before joining.

This introduces an important, generic, principle to writing code.

  • If it’s difficult to write a single statement, can you break the task down into separate components that are easier to handle?
  • If you can express the problem as a small set of simpler components, can you then combine the components?
  • If this approach works, is any loss of efficiency balanced by a gain in clarity and maintainability?

In the case of the OP, this isn’t a query that’s going to be running every few seconds – maybe it will run once per day, maybe three or four times per day – so there’s no need to be worried about making it as efficient as possible. Let’s go for simplicity.

Here’s a query that reports the space allocated for a simple heap table. It references dba_tables and dba_segments, so it has to be run by a user with DBA privileges, and at the moment it’s restricted to reporting a single user.

Wait – before I write the query I ought to create a testbed to see if the query is working. So let’s take an empty schema and create a few objects. Let’s create

  • a simple heap table with two indexes and two LOB columns.
  • an Index Organized Table (IOT) with an overflow and two LOB columns.
  • a table with two columns that are a nested table type

Here’s some suitable code:


rem
rem     Script:         sum_table_space.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

prompt  ============================================================
prompt  Simple heap table - copy of all_objects - then add two CLOBs
prompt  ============================================================

create table test_user.t1 as select * from all_objects;

alter table test_user.t1 add (c1 clob) lob (c1) store as basicfile t1_lob1 (disable storage in row);
alter table test_user.t1 add (c2 clob) lob (c2) store as basicfile t1_lob2 (disable storage in row);

update test_user.t1 set c1 = 'x', c2 = 'x' where rownum <= 125;

create index test_user.t1_i1 on test_user.t1(object_name, object_type, object_id);
create index test_user.t1_i2 on test_user.t1(object_type, object_name, object_id);

prompt  =========================================================
prompt  Simple IOT table - with a couple of CLOBs in the overflow
prompt  =========================================================

create table test_user.iot1 (
        id              number          not null,
        ind_pad         varchar2(20)    not null,
        n2              number          not null,
        n3              number          not null,
        padding         varchar2(50),
        ct1             clob,
        ct2             clob,
        constraint iot1_pk primary key (id, ind_pad)
)
organization index
        overflow
        including n3
lob(ct1) store as basicfile iot_lob1(
        disable storage in row
)
lob(ct2) store as basicfile iot_lob2(
        disable storage in row
)
;

insert into test_user.iot1
select
        rownum,
        rpad(rownum,20),
        rownum,
        rownum,
        rpad('x',50,'x'),
        case when mod(rownum,100) = 0 then rpad('x',100,'x') end,
        case when mod(rownum,100) = 0 then rpad('x',100,'x') end
from
        all_objects
where
        rownum <= 10000
;

commit;

prompt  ====================================================
prompt  Now create types to allow creation of a nested table
prompt  ====================================================

create type test_user.jpl_item as object (n1 number, v1 varchar2(32));
/

create type test_user.jpl_tab_type as table of jpl_item;
/

create table test_user.demo_nest (
        id      number,
        nest1   test_user.jpl_tab_type,
        nest2   test_user.jpl_tab_type
)
nested table nest1 store as t_nested1
return as locator
nested table nest2 store as t_nested2
return as locator
;

create unique index test_user.tn1_pk on test_user.t_nested1(nested_table_id, n1) compress 1;
create unique index test_user.tn2_pk on test_user.t_nested2(nested_table_id, n1) compress 1;

create index  test_user.tn1_v1 on test_user.t_nested1(v1);
create index  test_user.tn2_v1 on test_user.t_nested2(v1);

insert into test_user.demo_nest values (
        1,
        test_user.jpl_tab_type(
                test_user.jpl_item(1,'Item One one'),
                test_user.jpl_item(2,'Item One two')
        ),
        test_user.jpl_tab_type(
                test_user.jpl_item(1,'Item One one'),
                test_user.jpl_item(2,'Item One two')
        )
);

insert into test_user.demo_nest values (
        2,
        test_user.jpl_tab_type(
                test_user.jpl_item(1,'Item Two one'),
                test_user.jpl_item(2,'Item Two two')
        ),
        test_user.jpl_tab_type(
                test_user.jpl_item(1,'Item Two one'),
                test_user.jpl_item(2,'Item Two two')
        )
);

commit;

You’ll notice that I’ve prefixed every table, index and type name with a schema name. This is because I set up this test to run as a DBA so I’m creating the objects while connected with a different id.

Of course, before trying to report allocations summed by base table, it would be sensible to produce a simple list of the segments we should see so that we know when we’ve reported all of them. So let’s start with that very simple report:


column bytes format 999,999,999,999
column segment_name format a25

break on report on owner on object_name skip 1
compute sum of bytes on report

select
        segment_name, segment_type, bytes
from
        dba_segments
where
        owner = 'TEST_USER'
order by
        segment_type, segment_name
;

SEGMENT_NAME              SEGMENT_TYPE                  BYTES
------------------------- ------------------ ----------------
IOT1_PK                   INDEX                     1,048,576
SYS_C008380               INDEX                     1,048,576
SYS_C008381               INDEX                     1,048,576
SYS_FK0000075579N00002$   INDEX                     1,048,576
SYS_FK0000075579N00004$   INDEX                     1,048,576
T1_I1                     INDEX                     5,242,880
T1_I2                     INDEX                     5,242,880
TN1_PK                    INDEX                     1,048,576
TN1_V1                    INDEX                     1,048,576
TN2_PK                    INDEX                     1,048,576
TN2_V1                    INDEX                     1,048,576
SYS_IL0000075565C00027$$  LOBINDEX                  1,048,576
SYS_IL0000075565C00028$$  LOBINDEX                  1,048,576
SYS_IL0000075572C00006$$  LOBINDEX                  1,048,576
SYS_IL0000075572C00007$$  LOBINDEX                  1,048,576
IOT_LOB1                  LOBSEGMENT                1,048,576
IOT_LOB2                  LOBSEGMENT                1,048,576
T1_LOB1                   LOBSEGMENT                2,097,152
T1_LOB2                   LOBSEGMENT                2,097,152
T_NESTED1                 NESTED TABLE              1,048,576
T_NESTED2                 NESTED TABLE              1,048,576
DEMO_NEST                 TABLE                     1,048,576
SYS_IOT_OVER_75572        TABLE                     1,048,576
T1                        TABLE                    12,582,912
                                             ----------------
sum                                                47,185,920

So we have a list of segments, and we have a sum of bytes to aim for. One thing you might notice, though, is that the name “IOT1” has “disappeared”; instead, the thing we see as a “TABLE” is its overflow segment, called SYS_IOT_OVER_75572 (the number being the object_id of the table we originally defined). We will have to do something in our code to translate that sys_iot_over_75572 to iot1 if we want to make our final report easy to comprehend.

There are a few other oddities in the list, but some of them we may be familiar with already – the indexes with names like SYS_IL0000075565C00027$$ are the LOB indexes for the four LOBs we created. (I deliberately created basicfile LOBs in case the OP wasn’t using the securefile LOBs that most systems are now using.) The 75565 in the sample here is the object_id of the base table, and the C00027 tells us that the LOB is column 27 of the table definition.
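If you want to confirm which object such a system-generated name points back to, the embedded object_id can simply be looked up (a quick sketch; 75565 is the value from the sample above and will obviously differ on other systems):

select owner, object_name, object_type
from   dba_objects
where  object_id = 75565;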

You may be wondering about the indexes like SYS_C008380 and SYS_FK0000075579N00002$. We may need to work on those – but I’ll tell you the answer straight away – the former is a unique index on the “nest1” column  in the demo_nest table, the latter is a “foreign key” index on the (invisible) nested_table_id column in the nested tables.

Side note: although the sys_fk0000075579N00002$ index looks as if it’s supposed to be a “foreign key” index, as suggested by the name, the dependency, and the function, there’s no foreign key constraint declared between the parent table and the nested table. Also, if you check dba_indexes, the generated column has the value ‘N’, even though the name of the index was clearly generated by Oracle internal code. (The index at the other end of the join – sys_c008380 – is, however, flagged with generated = ‘Y’)

Let’s start building our solution with the easiest bit.


select 
        tab.owner, 
        cast('TABLE' as varchar2(11))   object_type,
        tab.table_name,
        nvl(seg_tab.bytes,0)            bytes  
from
        dba_tables        tab  
left outer join
        dba_segments      seg_tab  
on
        seg_tab.owner      = tab.owner
and     seg_tab.segment_name = tab.table_name
where
        tab.owner          = 'TEST_USER'
/


OWNER           OBJECT_TYPE TABLE_NAME                      BYTES
--------------- ----------- -------------------- ----------------
TEST_USER       TABLE       T1                         12,582,912
                TABLE       SYS_IOT_OVER_75572          1,048,576
                TABLE       DEMO_NEST                   1,048,576
                TABLE       IOT1                                0
***************                                  ----------------
sum                                                    14,680,064

Immediately we have to correct two problems – we can see the original iot1 table, but it doesn’t have an associated data segment: it has a “top”, which is the primary key index segment, and an “overflow”, which is designated by Oracle as a table segment. So we need to check the columns in dba_tables to work out how to get the more appropriate name reported for the table segment, and add a predicate to hide the original name. Here’s “tables only mark 2”:

select 
        tab.owner, 
        cast('TABLE' as varchar2(11))   object_type,
        case
                 when tab.iot_type = 'IOT_OVERFLOW' 
                        then tab.iot_name
                else tab.table_name
        end                             object_name,
        case
                 when tab.iot_type = 'IOT_OVERFLOW' 
                        then tab.table_name
        end                             auxiliary_name,
        nvl(seg_tab.bytes,0)            bytes  
from
        dba_tables        tab  
left outer join
        dba_segments      seg_tab  
on
        seg_tab.owner        = tab.owner
and     seg_tab.segment_name = tab.table_name
where
        tab.owner            = 'TEST_USER'
and     nvl(tab.iot_type,'NORMAL') != 'IOT'


OWNER           OBJECT_TYPE OBJECT_NAME          AUXILIARY_NAME                       BYTES
--------------- ----------- -------------------- ------------------------- ----------------
TEST_USER       TABLE       T1                                                   12,582,912
TEST_USER       TABLE       IOT1                 SYS_IOT_OVER_75572               1,048,576
TEST_USER       TABLE       DEMO_NEST                                             1,048,576

I’ve identified the table type using the iot_type column in dba_tables, hidden rows where the iot_type is ‘IOT’, and reported the iot_name (with table_name as an auxiliary name) when the iot_type is ‘IOT_OVERFLOW’. And we can now check that the result is consistent with the content of dba_segments that lists segment_type = ‘TABLE’.

So let’s move on to indexes.

select 
        ind.table_owner                 owner, 
        cast('INDEX' as varchar2(11))   object_type,
        ind.table_name,
        index_name                      auxiliary_name,
        nvl(seg_ind.bytes,0)            bytes  
from
        dba_indexes       ind  
left outer join 
        dba_segments      seg_ind  
on
        ind.owner      = seg_ind.owner
and     ind.index_name = seg_ind.segment_name  
where
        ind.table_owner = 'TEST_USER'


OWNER           OBJECT_TYPE TABLE_NAME           AUXILIARY_NAME                       BYTES
--------------- ----------- -------------------- ------------------------- ----------------
TEST_USER       INDEX       T1                   SYS_IL0000075565C00027$$         1,048,576
TEST_USER       INDEX       T1                   SYS_IL0000075565C00028$$         1,048,576
TEST_USER       INDEX       T1                   T1_I1                            5,242,880
TEST_USER       INDEX       T1                   T1_I2                            5,242,880
TEST_USER       INDEX       IOT1                 IOT1_PK                          1,048,576
TEST_USER       INDEX       IOT1                 SYS_IL0000075572C00007$$         1,048,576
TEST_USER       INDEX       IOT1                 SYS_IL0000075572C00006$$         1,048,576
TEST_USER       INDEX       T_NESTED1            SYS_FK0000075579N00002$          1,048,576
TEST_USER       INDEX       T_NESTED2            SYS_FK0000075579N00004$          1,048,576
TEST_USER       INDEX       DEMO_NEST            SYS_C008380                      1,048,576
TEST_USER       INDEX       DEMO_NEST            SYS_C008381                      1,048,576
TEST_USER       INDEX       T_NESTED1            TN1_PK                           1,048,576
TEST_USER       INDEX       T_NESTED2            TN2_PK                           1,048,576
TEST_USER       INDEX       T_NESTED1            TN1_V1                           1,048,576
TEST_USER       INDEX       T_NESTED2            TN2_V1                           1,048,576

A quick check shows that we’ve picked up the 15 indexes reported by dba_segments – although I’ve labelled them all just as “INDEX” while dba_segments does label LOB indexes differently. As you can see I’ve reported the table name in each case with the index name as the “auxiliary” name. This will be of use when I’m summing up the space associated with each table.

On to the LOBs – also easy:

select 
        lob.owner, 
        cast('LOB' as varchar2(11))   object_type,
        lob.table_name,
        lob.column_name               auxiliary_name,
        nvl(seg_lob.bytes,0)          bytes  
from
        dba_lobs    lob
left outer join 
        dba_segments      seg_lob
on
        seg_lob.owner        = lob.owner
and     seg_lob.segment_name = lob.segment_name
where 
        lob.owner            = 'TEST_USER'


OWNER           OBJECT_TYPE TABLE_NAME           AUXILIARY_NAME                       BYTES
--------------- ----------- -------------------- ------------------------- ----------------
TEST_USER       LOB         T1                   C1                               2,097,152
TEST_USER       LOB         T1                   C2                               2,097,152
TEST_USER       LOB         IOT1                 CT1                              1,048,576
TEST_USER       LOB         IOT1                 CT2                              1,048,576

And finally the nested tables:


select 
        nst.owner, 
        cast('NESTED' as varchar2(11))   object_type,
        nst.parent_table_name            object_name,
        table_name                       auxiliary_name,
        nvl(seg_nst.bytes,0)             bytes  
from
        dba_nested_tables    nst  
left outer join 
       dba_segments          seg_nst  
on
        seg_nst.owner        = nst.owner
and     seg_nst.segment_name = nst.table_name  
where
        nst.owner            = 'TEST_USER'

OWNER           OBJECT_TYPE OBJECT_NAME          AUXILIARY_NAME                       BYTES
--------------- ----------- -------------------- ------------------------- ----------------
TEST_USER       NESTED      DEMO_NEST            T_NESTED1                        1,048,576
TEST_USER       NESTED      DEMO_NEST            T_NESTED2                        1,048,576

A quick check tells us that the four pieces of code produce an exact match for all the segments reported in dba_segments – so all we have to do now is stitch the four queries together with UNION ALL, then sum(bytes) by owner and table_name.

There are 3 little problems to deal with though.

  • We have the predicate “where {owner} = ‘TEST_USER'” appearing 4 times in total, which is a bit messy. If we put this outside the UNION ALL, will Oracle be smart enough to push it back inside the UNION ALL as part of its query transformation? It seems to, but the plan (which is several hundred lines long – so I’m not going to look too closely) does change a little, though on my little test system it didn’t make much difference to the performance. Initially it looks as if it might be okay to create a view from this query with no restriction on table owner, and allow users to query the view with a schema name.
  • The code to report indexes reports the indexes on the nested tables under t_nested1 and t_nested2 – we want the indexes to be associated with table demo_nest, so we need to refine that bit of the code. It needs an outer join to dba_nested_tables to supply the parent_table_name if it exists.
  • On my little system the query takes several seconds to run. Should I worry about that? Not until I’ve tested it on a much bigger system, and not until I know how frequently it needs to run. It may be good enough as it stands, and the ease with which I actually modified the first version of my code to handle the nested tables indexing issue is an indicator of the benefits of keeping a complex job as the sum of its parts if it’s reasonable to do so.

Let’s go straight to the final (so far) SQL:


select
        owner, object_name, sum(bytes) 
from    (
        select 
                tab.owner, 
                cast('TABLE' as varchar2(11))   object_type,
                case
                         when tab.iot_type = 'IOT_OVERFLOW' 
                                then tab.iot_name
                        else tab.table_name
                end                             object_name,
                case
                         when tab.iot_type = 'IOT_OVERFLOW' 
                                then tab.table_name
                end                             auxiliary_name,
                nvl(seg_tab.bytes,0)            bytes  
        from
                dba_tables        tab  
        left outer join
                dba_segments      seg_tab  
        on
                seg_tab.owner        = tab.owner
        and     seg_tab.segment_name = tab.table_name
        where
                tab.owner            = 'TEST_USER'
        and     nvl(tab.iot_type,'NORMAL') != 'IOT'
        union all
        select 
                ind.table_owner                  owner, 
                cast('INDEX' as varchar2(11))    object_type,
                nvl(
                        nst.parent_table_name,
                        ind.table_name
                )                                table_name,
                index_name                       auxiliary_name,
                nvl(seg_ind.bytes,0)             bytes  
        from
                dba_indexes       ind  
        left outer join
                dba_nested_tables nst
        on
                nst.owner       = ind.table_owner
        and     nst.table_name  = ind.table_name
        left outer join 
                dba_segments      seg_ind  
        on
                seg_ind.owner         = ind.owner
        and     seg_ind.segment_name  = ind.index_name 
        where
                ind.table_owner = 'TEST_USER'
        union all
        select 
                lob.owner, 
                cast('LOB' as varchar2(11))   object_type,
                lob.table_name,
                lob.column_name               auxiliary_name,
                nvl(seg_lob.bytes,0)          bytes  
        from
                dba_lobs    lob
        left outer join 
                dba_segments      seg_lob
        on
                seg_lob.owner        = lob.owner
        and     seg_lob.segment_name = lob.segment_name
        where 
                lob.owner            = 'TEST_USER'
        union all
        select 
                nst.owner, 
                cast('NESTED' as varchar2(11))   object_type,
                nst.parent_table_name            object_name,
                table_name                       auxiliary_name,
                nvl(seg_nst.bytes,0)             bytes  
        from
                dba_nested_tables    nst  
        left outer join 
               dba_segments          seg_nst  
        on
                seg_nst.owner        = nst.owner
        and     seg_nst.segment_name = nst.table_name
        where
                nst.owner         = 'TEST_USER'
        )
where
        owner = 'TEST_USER'
group by
        owner, object_name
order by
        sum(bytes)
/

OWNER           OBJECT_NAME                SUM(BYTES)
--------------- -------------------- ----------------
TEST_USER       IOT1                        6,291,456
TEST_USER       DEMO_NEST                  11,534,336
TEST_USER       T1                         29,360,128
                                     ----------------
sum                                        47,185,920


I wrote this note because my response to the original query was an informal suggestion of taking this approach; then I thought it might be worth writing a general comment about solving big problems by starting with little problems and stitching the pieces together, using this case as a concrete example – then I just went ahead to finish the thing off within the boundaries of the original requirements.

There’s a reason, though, why I tend to avoid publishing SQL that might be useful – it’s always possible to overlook bits and end up with people trying to use code that’s not totally appropriate to their system. In this case, of course, there’s a total (but deliberate) failure to handle clusters, partitions and subpartitions, but I don’t know how many other of the “less commonly used” bits of Oracle I haven’t considered. I do know that I haven’t touched on domain indexes (such as text indexes with all their convoluted collections of tables and indexes) but there may be some oddities of (e.g.) advanced queueing, replication, and audit vault that fall outside the range of the possibilities I’ve covered above.

Update 7th Nov 2019

The OP from OTN reported seeing some nested table in the output – which I thought I’d catered for so I modified the code I have on my sandbox to add a nested table to the IOT, and added a couple of extra indexes on the parent of the nested table, changed a couple of columns and object names to quoted mixed case, and everything still seemed to work okay.

It turned out that the OP had nested tables that were nested 5 deep – which means I need a hierarchical query to connect the 2nd to 5th (or nth) layer of nesting back to the parent to report the total volume against the top level table.
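As a rough, untested sketch of the kind of hierarchical query that might be needed to map every level of nesting back to its top-level table (using only columns from dba_nested_tables; the result would then be joined to the segment data in the same way as before):

select
        owner,
        table_name                        nested_table_name,
        connect_by_root parent_table_name top_level_table,
        level                             nesting_depth
from
        dba_nested_tables
start with
        (owner, parent_table_name) not in (
                select owner, table_name from dba_nested_tables
        )
connect by
        owner             = prior owner
and     parent_table_name = prior table_name
;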

Watch this space – but not for a day or two.

 

Once-Reticent Utilities Accelerating Move to the Cloud

Oracle Press Releases - Wed, 2019-11-06 07:00
Press Release
Once-Reticent Utilities Accelerating Move to the Cloud Survey shows desire to boost customer experience driving utilities’ cloud adoption, although cybersecurity remains a top concern

Redwood Shores, Calif.—Nov 6, 2019

A large majority (71 percent) of utilities now use cloud software, up from just 45 percent three years ago according to a new Oracle survey. Improving customer experience was noted as the key driver for increased cloud investment. But while progress has been made, 85 percent of respondents still cite security as a top concern and noted that regulatory acceptance remains a barrier to cloud adoption.

“Just a few years ago, utilities looked at cloud computing with skepticism at best. Today, many realize it’s essential to their survival,” said Brian Bradford, vice president, industry solutions, Oracle. “Utilities are increasingly seeing cloud as a fundamental asset in meeting ever-evolving expectations, mitigating security concerns, and turning data into an opportunity to modernize and evolve their operations to better serve constituents and customers.”

The report, conducted by research firm Zpryme, surveyed more than 150 global utility leaders to better understand the pace and breadth of cloud computing adoption. Respondents spanned investor-owned, municipal, cooperative and district/federal utilities representing electric, gas, water, and renewables. To download the full report, visit: https://etsinsights.com/reports/the-acceleration-of-cloud-computing-for-utilities/

Changing Customer Expectations

Customers are increasingly looking for more information regarding their overall utility usage and bill and how to lessen the impact of both. And they expect this with the high level of service and personalization that they are enjoying in other sectors, such as retail and banking. Utilities see cloud technologies as a means to provide the speed and control to meet these customer expectations. So, it was no surprise to see continued investment in customer experience and engagement (31 percent) and customer information (29 percent) cloud systems in the top five investment priorities for utilities in the coming years.

Drowning in Data

Sixty-four percent of utilities noted that cloud computing is “critical to my company’s future success.” A key driver is the need to better manage an ever-growing pool of data coming from smart meters, IoT sensors, customers’ home energy devices, and more. Utilities realize that it is becoming too great a task and risk to manage this influx without a significant investment in tools that can capture and analyze grid edge, supply chain, and customer data quickly. Utilities hope to use the technological flexibility provided by cloud computing to innovate by using data to solve business problems from grid optimization to managing distributed energy resources (DERs).

“Utilities are seeing the impact that digital transformation is having on their business model,” said Jason Rodriguez, CEO of Zpryme. “The challenge is so much greater than simply capturing millions of disparate data points. Utilities need IT tools to manage and drive better business decisions, and cloud computing provides the agility to more effectively integrate data and use it predictively.”

Security Concerns Remain

Utilities remain concerned about security (85 percent) and privacy (81 percent) and also rated these as the biggest barriers they face when it comes to using or expanding cloud computing. Concerns, however, were not around the vulnerability of cloud computing technology itself, but rather a growing barrage of increasingly sophisticated cybersecurity threats. Whether it is IoT devices, field area networks, or 5G networks, utilities are operating in a hyper-connected digital ecosystem that can provide new and varied opportunities for nefarious actors to expose their networks. Utilities recognize that they need to stay vigilant and that will require help from both regulators and vendors.

Regulatory Right-Sizing

While improving, many regulators globally still don’t allow for utilities to earn a rate of return on their cloud investments, which impedes upgrading legacy IT systems. While utilities recognize the role that regulators play in approving their prudent expenditures, 26 percent cited regulatory acceptance as a continued barrier to cloud adoption. The majority of utility respondents (75 percent), however, also note that regulators can play a strong role in helping protect customer privacy and security.

Clouds Ahead

Many utilities dipped their toe in the cloud waters with enterprise applications, such as enterprise resource planning (ERP) or human capital management (HCM), and 74 percent plan to spend more on cloud in the next 3-5 years. While a move to better serve and engage customers is driving this next phase of cloud adoption, operational system investments are not far behind. Forty-three percent of utilities have increased their operational spending by at least 25 percent over 2015 levels.

To learn more about Oracle’s utility solutions, please visit: www.oracle.com/utilities

Contact Info
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1.925.787.6744

Turning On or Tuning Out? Aiconix Applies AI to See What Holds TV Viewers’ Interest

Oracle Press Releases - Wed, 2019-11-06 06:00
Blog
Turning On or Tuning Out? Aiconix Applies AI to See What Holds TV Viewers’ Interest

By Guest Blogger, Oracle—Nov 6, 2019

Why does one video go viral and another lose viewers? News agencies, publishers, and entertainment companies are becoming increasingly methodical about measuring audiences’ engagement. Aiconix, a German startup based in Hamburg, is applying artificial intelligence to ferret out the insights needed to help organizations answer those questions.

Deploying groups of testers in a neurolab in its home city, aiconix uses electroencephalogram technology to measure viewers’ emotions, including surprise or boredom, while watching video clips. It’s developed AI software called aingine based on data that can predict whether certain factors in clips, such as abrupt subject matter shifts or too little connection between a main theme and protagonist’s words, cause viewers to switch off. In addition, the software can identify what roles speakers play in a video, help journalists quickly identify and label subjects in video clips, and automatically produce transcripts with time codes, taking away some of the grunt work of TV production.

Thanks to a reliable and high-performing IT cloud infrastructure from Oracle, aiconix can offer its customers fast, secure performance with the ability to scale. The company, part of the Oracle Global Startup Ecosystem, is using Oracle Cloud Infrastructure, including state-of-the-art graphics processors, to train its AI system. Oracle’s startup program connects promising startups with Oracle’s cloud resources and enterprise expertise.

“We needed a reliable data center in Germany where our aingine could run, scale, and grow with us,” says Eugen Gross, CEO and co-founder of aiconix. “Oracle Cloud Infrastructure provides us with a range of cloud resources and features we need. In addition, being part of the Global Startup Ecosystem gives us the personal support and direct access to customers.”

So far, the company has attracted customers at organizations including a large European press agency and a German news photography agency.

Read More Stories from Oracle Cloud

Aiconix is one of the thousands of customers on its journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes

How to scale up a Patroni cluster

Yann Neuhaus - Wed, 2019-11-06 04:43

During the preparation of my presentation for the pgconf.eu I ran into one big issue: I had to stop my cluster to add a new node. That was not the way I wanted to achieve this. I want a high availability solution that can be scaled up without any outage. Thanks to a little hint during the pgconf.eu I was able to find a solution. In this post I will show the manual scale-up, without using a playbook.

Starting position

We start with a three-node Patroni cluster, which can be created using this blog post.
Now we want to add a fourth node to the existing etcd and Patroni cluster. In case you also need a playbook to install a fourth node, check out my GitHub repository.

Scale up the etcd cluster

This step is only needed if you want to scale up your etcd cluster as well. To scale up a Patroni cluster it is not necessary to scale up the etcd cluster; you can, of course, scale up Patroni without adding more etcd members. But maybe someone also needs to scale up an etcd cluster and is searching for a solution. If not, just jump to the next step.

Make sure the etcd and Patroni services are not started on the fourth node.

postgres@patroni4:/home/postgres/ [PG1] systemctl status etcd
● etcd.service - dbi services etcd service
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
postgres@patroni4:/home/postgres/ [PG1] systemctl status patroni
● patroni.service - dbi services patroni service
   Loaded: loaded (/etc/systemd/system/patroni.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
postgres@patroni4:/home/postgres/ [PG1]

Make the following adjustments in the etcd.conf of the 4th node.

postgres@patroni4:/home/postgres/ [PG1] cat /u01/app/postgres/local/dmk/etc/etcd.conf
name: patroni4
data-dir: /u02/pgdata/etcd
initial-advertise-peer-urls: http://192.168.22.114:2380
listen-peer-urls: http://192.168.22.114:2380
listen-client-urls: http://192.168.22.114:2379,http://localhost:2379
advertise-client-urls: http://192.168.22.114:2379
initial-cluster-state: 'existing'
initial-cluster: patroni1=http://192.168.22.111:2380,patroni2=http://192.168.22.112:2380,patroni3=http://192.168.22.113:2380,patroni4=http://192.168.22.114:2380

Next, add the new etcd member to the existing etcd cluster. You can execute this on any existing member of the cluster.

postgres@patroni1:/home/postgres/ [PG1] etcdctl member add patroni4 http://192.168.22.114:2380
Added member named patroni4 with ID dd9fab8349b3cfc to cluster

ETCD_NAME="patroni4"
ETCD_INITIAL_CLUSTER="patroni4=http://192.168.22.114:2380,patroni1=http://192.168.22.111:2380,patroni2=http://192.168.22.112:2380,patroni3=http://192.168.22.113:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

Now you can start the etcd service on the 4th node.

postgres@patroni4:/home/postgres/ [PG1] sudo systemctl start etcd
postgres@patroni4:/home/postgres/ [PG1] systemctl status etcd
● etcd.service - dbi services etcd service
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-10-17 16:39:16 CEST; 9s ago
 Main PID: 8239 (etcd)
   CGroup: /system.slice/etcd.service
           └─8239 /u01/app/postgres/local/dmk/bin/etcd --config-file /u01/app/postgres/local/dmk/etc/etcd.conf
postgres@patroni4:/home/postgres/ [PG1]

A short check shows that node 4 has been added to the existing cluster:

postgres@patroni4:/home/postgres/ [PG1] etcdctl member list
dd9fab8349b3cfc: name=patroni4 peerURLs=http://192.168.22.114:2380 clientURLs=http://192.168.22.114:2379 isLeader=false
16e1dca5ee237693: name=patroni1 peerURLs=http://192.168.22.111:2380 clientURLs=http://192.168.22.111:2379 isLeader=false
28a43bb36c801ed4: name=patroni2 peerURLs=http://192.168.22.112:2380 clientURLs=http://192.168.22.112:2379 isLeader=false
5ba7b55764fad76e: name=patroni3 peerURLs=http://192.168.22.113:2380 clientURLs=http://192.168.22.113:2379 isLeader=true
Scale up Patroni

Scaling up the Patroni cluster is also really easy.
Adjust the hosts entry in the patroni.yml on the new node.

postgres@patroni4:/home/postgres/ [PG1] cat /u01/app/postgres/local/dmk/etc/patroni.yml | grep hosts
  hosts: 192.168.22.111:2379,192.168.22.112:2379,192.168.22.113:2379,192.168.22.114:2379

Afterwards, start the Patroni service.

postgres@patroni4:/home/postgres/ [PG1] sudo systemctl start patroni
postgres@patroni4:/home/postgres/ [PG1] systemctl status patroni
● patroni.service - dbi services patroni service
   Loaded: loaded (/etc/systemd/system/patroni.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-10-17 17:03:19 CEST; 5s ago
  Process: 8476 ExecStartPre=/usr/bin/sudo /bin/chown postgres /dev/watchdog (code=exited, status=0/SUCCESS)
  Process: 8468 ExecStartPre=/usr/bin/sudo /sbin/modprobe softdog (code=exited, status=0/SUCCESS)
 Main PID: 8482 (patroni)
   CGroup: /system.slice/patroni.service
           ├─8482 /usr/bin/python2 /u01/app/postgres/local/dmk/bin/patroni /u01/app/postgres/local/dmk/etc/patroni.yml
           ├─8500 /u01/app/postgres/product/11/db_5/bin/postgres -D /u02/pgdata/11/PG1/ --config-file=/u02/pgdata/11/PG1/postgresql.conf --listen_addresses=192.168.22.114 --max_worker_processes=8 --max_locks_per_transact...
           ├─8502 postgres: PG1: logger
           ├─8503 postgres: PG1: startup   waiting for 000000020000000000000006
           ├─8504 postgres: PG1: checkpointer
           ├─8505 postgres: PG1: background writer
           ├─8506 postgres: PG1: stats collector
           ├─8507 postgres: PG1: walreceiver
           └─8513 postgres: PG1: postgres postgres 192.168.22.114(48882) idle

To make sure everything runs correctly, check the status of the Patroni cluster:

postgres@patroni4:/home/postgres/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 |        | running |  2 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  2 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 | Leader | running |  2 |       0.0 |
|   PG1   | patroni4 | 192.168.22.114 |        | running |  2 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
Conclusion

Using the playbooks had one shortcoming: the hosts entry in patroni.yml only pointed to localhost. When starting the fourth node, Patroni did not look for all the other hosts, it only checked its own availability. This works fine for an initial cluster, but not when you want to extend one.
And: always keep in mind that you need an uneven number of members for an etcd cluster, so don't add only a fourth etcd node. A quick health check like the sketch below helps to verify the cluster after such a change.
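As a final sanity check after resizing the etcd cluster, you can verify that all members are healthy and that a leader has been elected. This is only a minimal sketch using the same etcdctl (v2 API) tooling as above; the lines in the comments illustrate what to expect, they are not captured output.

postgres@patroni4:/home/postgres/ [PG1] etcdctl cluster-health
# expect one line per member, for example:
# member 16e1dca5ee237693 is healthy: got healthy result from http://192.168.22.111:2379
# ...
# cluster is healthy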

Cet article How to scale up a Patroni cluster est apparu en premier sur Blog dbi services.

Galera Cluster 4 with MariaDB 10.4.8

Yann Neuhaus - Wed, 2019-11-06 02:31

Last month, at a new customer, I had to install the latest version of the MariaDB server, 10.4.8, to set up a Galera Cluster with 3 master nodes.
The good news was that this version ships with the latest version of the Galera plugin from Codership: Galera Cluster 4.0.
As usual, installation & configuration were quite easy.

$ sudo yum -y install MariaDB-server
$ sudo yum list installed|grep -i mariadb
MariaDB-client.x86_64 10.4.8-1.el7.centos @mariadb-main
MariaDB-common.x86_64 10.4.8-1.el7.centos @mariadb-main
MariaDB-compat.x86_64 10.4.8-1.el7.centos @mariadb-main
MariaDB-server.x86_64 10.4.8-1.el7.centos @mariadb-main
galera.x86_64 26.4.2-1.rhel7.el7.centos @mariadb-main
New Features:

Now I just want to introduce some of the interesting new high-level features available in this version.

Streaming Replication

In previous versions, large and long-running write transactions regularly caused conflicts during certification-based replication, and because of this, transactions were often aborted and rolled back.
Some of our customers were really suffering and complaining about this problem and limitation.
Now, for a big transaction, the node that initiated it no longer has to wait until the commit: it breaks the transaction into fragments, certifies them and replicates them to the other master nodes while the transaction is still running. A small sketch of how to enable this per session is shown below.
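For illustration only (the threshold below is an assumed example value, not taken from the original post): streaming replication can be switched on per session with the wsrep_trx_fragment_unit and wsrep_trx_fragment_size variables, for example to replicate a fragment after every 10,000 rows:

MariaDB [(none)]> SET SESSION wsrep_trx_fragment_unit='rows';
MariaDB [(none)]> SET SESSION wsrep_trx_fragment_size=10000;
-- large transactions in this session are now replicated in fragments of 10000 rows
MariaDB [(none)]> SET SESSION wsrep_trx_fragment_size=0;
-- setting the size back to 0 disables streaming replication for the session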

New System Tables

When having a look at the mysql database, we can see 3 new system tables.
They contain information that is already available in status variables, but now it is persistent:

MariaDB [mysql]> show tables from mysql like 'wsrep_%';
+---------------------------+
| Tables_in_mysql (wsrep_%) |
+---------------------------+
| wsrep_cluster             |
| wsrep_cluster_members     |
| wsrep_streaming_log       |
+---------------------------+
3 rows in set (0.000 sec)

These 3 new tables should bring to the database administrators a better overview of the current status of the cluster.

MariaDB [mysql]> select * from wsrep_cluster;
+--------------------------------------+---------+------------+------------------+--------------+
| cluster_uuid                         | view_id | view_seqno | protocol_version | capabilities |
+--------------------------------------+---------+------------+------------------+--------------+
| 6c41b92b-e0f9-11e8-9924-de3112d0ce21 | 3       | 1967       | 4                | 184703       |
+--------------------------------------+---------+------------+------------------+--------------+
1 row in set (0.000 sec)

cluster_uuid is the uuid of the cluster, corresponding to the status variable: wsrep_cluster_state_uuid
view_id is the number of cluster configuration changes, corresponding to the status variable wsrep_cluster_conf_id
view_seqno is the latest Galera sequence number, corresponding to the status variable: wsrep_last_committed
protocol_version is the MariaDB wsrep patch version, corresponding to the status variable: wsrep_protocol_version
capabilities is the capabilities bitmask provided by the Galera library.

MariaDB [mysql]> select * from wsrep_cluster_members;
+--------------------------------------+--------------------------------------+-----------+-----------------------+
| node_uuid                            | cluster_uuid                         | node_name | node_incoming_address |
+--------------------------------------+--------------------------------------+-----------+-----------------------+
| 6542be69-ffd5-11e9-a2ed-a363df0547d5 | 6c41b92b-e0f9-11e8-9924-de3112d0ce21 | node1     | AUTO                  |
| 6ae6fec5-ffd5-11e9-bb70-da54860baa6d | 6c41b92b-e0f9-11e8-9924-de3112d0ce21 | node2     | AUTO                  |
| 7054b852-ffd5-11e9-8a45-72b6a9955d28 | 6c41b92b-e0f9-11e8-9924-de3112d0ce21 | node3     | AUTO                  |
+--------------------------------------+--------------------------------------+-----------+-----------------------+
3 rows in set (0.000 sec)

This system table displays the current membership of the cluster.
It contains a row for each member node of the cluster.
node_uuid is the unique identifier of the master node.
cluster_uuid is the unique identifier of the cluster. It must be the same for all members.
node_name is self-explanatory.
node_incoming_address stores the IP address and port for client connections.

MariaDB [mysql]> select * from wsrep_streaming_log;
Empty set (0.000 sec)

This system table only contains rows while there is a transaction running with streaming replication enabled.

Synchronization Functions

These new SQL functions can be used in wsrep synchronization operations.
It is possible to use them to obtain or wait for a GTID (Global Transaction ID); a small usage sketch follows the list below.
WSREP_LAST_SEEN_GTID(): returns the GTID of the last write transaction observed by the client
WSREP_LAST_WRITTEN_GTID(): returns the GTID of the last write transaction made by the client
WSREP_SYNC_WAIT_UPTO_GTID(): blocks the client until the node applies and commits the given transaction
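A minimal sketch of a read-your-own-writes pattern with these functions (the GTID value and the 5 second timeout are placeholders, not output from this cluster):

-- on the node that performed the write, right after the commit:
MariaDB [test]> SELECT WSREP_LAST_WRITTEN_GTID();
-- on the node that should read the data, wait until that GTID has been applied (timeout 5 seconds):
MariaDB [test]> SELECT WSREP_SYNC_WAIT_UPTO_GTID('<gtid returned above>', 5);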

Conclusion:

These new features, and especially streaming replication, which is a real improvement and a huge boost for large
transaction support, should satisfy users and DBAs and hopefully give them a better opinion of the MariaDB Galera Cluster.
In another blog, I will try to demonstrate how this streaming replication works.

Cet article Galera Cluster 4 with MariaDB 10.4.8 est apparu en premier sur Blog dbi services.

Create a Kubernetes cluster with Google Kubernetes Engine

Yann Neuhaus - Wed, 2019-11-06 01:16

Nowadays the market for cloud providers is very competitive. Large companies are fighting a very hard battle over the services they provide. Each offers a wide range of more or less identical products with specific features for each.

From my point of view, having deployed Kubernetes clusters in several environments (cloud and on-premises), Google Cloud deserves particular attention for its Google Kubernetes Engine offering. The deployment of a Kubernetes cluster is very fast and gives us a test/production environment in a few minutes.

Therefore, in this blog post, we will explain how to create a Kubernetes cluster in Google Cloud with some useful additional resources.

Prerequisites

A Google account is needed. You can create one by following the sign-up link: https://cloud.google.com. Otherwise, you can use the free tier account: https://cloud.google.com/free/?hl=en.

Create your project

Go to the cloud portal through the following link: https://console.cloud.google.com/home/dashboard

The first step is the creation of a project. Before creating any resource, you need to create a project that will encapsulate all your resources. To properly create a project, follow the steps below:

Enter your project name and click on create:

After a few seconds, your project will be created, and you will have access to the home dashboard:

Create your cluster

Once the project is created and ready to use, let’s create now our Kubernetes cluster. Click on the Kubernetes Engine menu and clusters sub-menu to begin the creation process.

Once the Kubernetes Engine API is enabled, we can click on the create cluster button and configure our cluster as needed.

We choose a standard cluster with 3 cluster nodes. You can edit the resources of your cluster according to your needs. For our example, we kept the default configuration provided by the API.

Click on the create button and after a few minutes, your cluster is ready for usage.
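If you prefer the command line over the console, a comparable standard cluster can also be created with the gcloud CLI once the Cloud SDK (see the next section) is configured. This is only a sketch; the cluster name and node count mirror the example above, and the zone is an assumption you should adapt:

mehdi@MacBook-Pro: gcloud container clusters create standard-cluster-1 --zone us-central1-a --num-nodes 3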

Start using your Kubernetes cluster

The Google Cloud SDK is needed to use your Kubernetes cluster from your favorite client platform. To install the Google Cloud SDK, follow the instructions here:

Once the Cloud SDK is properly installed, we can initialize our environment with the following steps:

mehdi@MacBook-Pro: gcloud init
Welcome! This command will take you through the configuration of gcloud.
 
Settings from your current configuration [default] are:
core:
  account: mehdi.bada68@gmail.com
  disable_usage_reporting: 'True'
  project: jx-k8s-2511
 
Pick configuration to use:
 [1] Re-initialize this configuration [default] with new settings
 [2] Create a new configuration
Please enter your numeric choice:  1
 
Your current configuration has been set to: [default]
 
You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics
 
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).
 
Choose the account you would like to use to perform operations for
this configuration:
 [1] mehdi.bada68@gmail.com
 [2] Log in with a new account
Please enter your numeric choice:  1
 
You are logged in as: [mehdi.bada68@gmail.com].
 
Pick cloud project to use:
 [1] kubernetes-infra-258110
 [2] Create a new project
Please enter numeric choice or text value (must exactly match list
item):  1
 
Your current project has been set to: [kubernetes-infra-258110].
 
Do you want to configure a default Compute Region and Zone? (Y/n)?  Y
 
Which Google Compute Engine zone would you like to use as project
default?
If you do not specify a zone via a command-line flag while working
with Compute Engine resources, the default is assumed.
 
Please enter numeric choice or text value (must exactly match list
item):  8

Now log in to gcloud:

mehdi@MacBook-Pro: gcloud auth login
… 
You are now logged in as [mehdi.bada68@gmail.com].
Your current project is [kubernetes-infra-258110].  You can change this setting by running:
  $ gcloud config set project PROJECT_ID

Update your ~/.kube/config file with the credentials of the cluster created before:

mehdi@MacBook-Pro: gcloud container clusters get-credentials standard-cluster-1
Fetching cluster endpoint and auth data.
kubeconfig entry generated for standard-cluster-1.

Your kubectl client is now connected to your remote GKE cluster.

mehdi@MacBook-Pro: kubectl get nodes -o wide
NAME                                                STATUS   ROLES    AGE   VERSION          INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-standard-cluster-1-default-pool-1ac453ab-6tj4   Ready    <none>   56m   v1.13.11-gke.9   10.128.0.3    34.70.191.147   Container-Optimized OS from Google   4.14.145+        docker://18.9.7
gke-standard-cluster-1-default-pool-1ac453ab-s242   Ready    <none>   56m   v1.13.11-gke.9   10.128.0.4    35.188.3.165    Container-Optimized OS from Google   4.14.145+        docker://18.9.7
gke-standard-cluster-1-default-pool-1ac453ab-w0j0   Ready    <none>   56m   v1.13.11-gke.9   10.128.0.2    34.70.107.231   Container-Optimized OS from Google   4.14.145+        docker://18.9.7
Deploy Kubernetes Dashboard

After configuring the kubectl client, we can start deploying resources on the Kubernetes cluster. One of the most popular resources in Kubernetes is the dashboard. It gives users and admins a graphical view of all cluster resources.

Download the dashboard deployment locally:

curl -o dashboard.yaml  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

Then apply the deployment:

mehdi@MacBook-Pro: kubectl apply -f dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Create an admin Service Account and Cluster Role Binding that you can use to securely connect to the dashboard with admin-level permissions:

mehdi@MacBook-Pro: vi admin-sa.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard

mehdi@MacBook-Pro: kubectl apply -f admin-sa.yaml
serviceaccount/admin created
clusterrolebinding.rbac.authorization.k8s.io/admin created

First, retrieve the authentication token for the admin service account, as below:

mehdi@MacBook-Pro: kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin | awk '{print $1}')
Name:         admin-token-dpsl9
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: 888de3dc-ffff-11e9-b5ca-42010a800046

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1119 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1kcHNsOSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijg4OGRlM2RjLWZmZmYtMTFlOS1iNWNhLTQyMDEwYTgwMDA0NiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.DBrfylt1RFDpHEuTy4l0BY-kRwFqm9Tvfne8Vu-IZVghy87vVWtsCatjt2wzCtMjX-I5oB0YAYmio7pTwPV-Njyd_VvbWupqOF7yiYE72ZXri0liLnQN5qbtyOmswsjim0ehG_yQSHaAqp21cQdPXb59ItBLN7q0-dh8wBRyOMAVLttjbmzBb02XxtJlALYg8F4hAkyHjJAzHAyntMylUXyS2gn471WUYFs1usDDpA8uZRU3_K6oyccXa-xqs8kKRB1Ch6n4Cq9TeMKkoUyv0_alEEQvwkp_uQCl2Rddk7bLNnjfDXDPC9LXOT-2xfvUf8COe5dO-rUXemHJlhPUHw

Copy the token value.

Access the Kubernetes dashboard using the kubectl proxy command line:

mehdi@MacBook-Pro: kubectl proxy
Starting to serve on 127.0.0.1:8001

The dashboard is now available in the following link: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

Choose the token authentication and paste the value from the previous output.

You have now access to the Kubernetes dashboard and deployed your first Kubernetes resource!

Deploy an Ingress Load Balancer

In order to access your cluster services externally, we need to create an ingress load balancer for our GKE cluster. The ingress load balancer makes HTTP/HTTPS applications publicly accessible by creating an external IP address for the cluster.

Before creating the ingress, we need to deploy a test application for our example. Let’s deploy an NGINX server.

mehdi@MacBook-Pro: vi nginx-deployment.yaml

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
  selector:
    app: nginx


mehdi@MacBook-Pro: kubectl apply -f nginx-deployment.yaml

deployment.apps/nginx-deployment unchanged
service/nginx created

Create the ingress resource and deploy it as following:

mehdi@MacBook-Pro: vi basic-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80

mehdi@MacBook-Pro: kubectl apply -f basic-ingress.yaml
ingress.extensions/basic-ingress created

Verify the status of the ingress:

mehdi@MacBook-Pro: kubectl get ing -o wide
NAME            HOSTS   ADDRESS         PORTS   AGE
basic-ingress   *       34.102.214.94   80      8h

The ingress resources have been properly created. We can see the result directly from the Google Cloud dashboard.

The NGINX service is now available via the ingress load balancer and can be accessed through the external IP address of the ingress (34.102.214.94 in this example).
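A quick way to verify this from a client, once the load balancer has finished provisioning, is to request the default NGINX page through the ingress IP (this check is an addition to the original walkthrough):

mehdi@MacBook-Pro: curl -I http://34.102.214.94/
# an HTTP 200 response with a Server: nginx header shows that the ingress is serving traffic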

Cet article Create a Kubernetes cluster with Google Kubernetes Engine est apparu en premier sur Blog dbi services.

Which is Bigger – KB or MB?

VitalSoftTech - Tue, 2019-11-05 09:49

A computer noob like myself often gets confused about storage units in terms of the memory of a computer. All these KBs, MBs and GBs boggle my mind. Does that happen to you too? The question that baffles me the most is, which is Bigger – KB or MB? In addition to answering this question, […]

The post Which is Bigger – KB or MB? appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Honors Excellence in Construction and Engineering

Oracle Press Releases - Tue, 2019-11-05 09:18
Press Release
Oracle Honors Excellence in Construction and Engineering Customer honorees in the Americas region include Austin Commercial, Ameren, BlueCross BlueShield of South Carolina, DPR Construction and Facebook

Redwood Shores, Calif.—Nov 5, 2019

Oracle is today recognizing those Americas customers who are charting new paths and reimagining the building industry through technology with the 2019 Oracle Construction and Engineering Excellence Awards. Building on the Oracle Aconex Connect Awards, this new awards program highlights noteworthy customers that are successfully improving project delivery across industries.

Project- and asset-intensive organizations face numerous challenges, including growing project complexity, rising competition, fragmented project supply chains, and increasing risks. Selected from a large number of entrants, this year’s winners demonstrated compelling results in overcoming these challenges through their use of Oracle’s Construction and Engineering solution suite, including Oracle’s Primavera solutions, Oracle Aconex and Oracle Textura Payment Management.

“The Oracle Construction and Engineering Excellence Awards winners represent organizations of all sizes that have demonstrated remarkable success from the digital transformation of their businesses across the project and asset lifecycle,” said Mark Webster, senior vice president and general manager, Oracle Construction and Engineering. “From significant productivity gains, to better payment outcomes across the supply chain, to unlocking project intelligence for smarter decision making, the improvements realized by these organizations underscore their leadership in driving project delivery forward, with benefits that extend to all stakeholders. We congratulate the honorees and are thrilled to recognize their accomplishments.”

Select results from the 2019 Oracle Construction and Engineering Excellence Awards winners:

  • Austin Commercial achieved enterprise-wide standardization with Oracle Textura Payment Management, reducing pay application processing time from four to five days to just two days.
  • Ameren Corporation reduced the time spent evaluating project justifications by 90 percent (from 30-60 minutes, to just three to five) with Oracle’s Primavera P6 Enterprise Project Portfolio Management (EPPM) and Oracle’s Primavera Unifier.
  • BlueCross BlueShield of South Carolina accomplished organizational transformation with the help of Oracle’s Primavera Unifier and is now able to close internal work orders 50 percent faster than previously.
  • DPR Construction eliminated 360 accounting hours per week through its implementation of Oracle Textura Payment Management and a direct link to its ERP system, meaning tens of millions of dollars now reach subcontractors up to seven days faster.
  • Sage Homes Northwest sped up work package assignment to new projects by 66 percent utilizing Oracle Primavera Cloud.
 

Customers were nominated for the 2019 Oracle Construction and Engineering Excellence Awards in the following categories:

  • Data-driven Insights
  • Enterprise-wide Standardization
  • Innovation in Sustainability
  • Innovator of the Year (Individual award)
  • Platform Expansion
  • Social Impact Project of the Year
  • Solutions Implementation
  • Systems Integration
 

In addition, honorees were recognized for their achievements in the following segments:  Energy and Resources, Industrial Manufacturing, Public Infrastructure, and Residential and Commercial.

Below is a full list of 2019 Oracle Construction and Engineering Excellence Awards winners in the Americas:

  • Ameren Corporation (Ameren) – Platform Expansion in Energy and Resources, Primavera P6 Enterprise Project Portfolio Management (EPPM), Primavera Unifier
  • AMP United, LLC – Enterprise-wide Standardization in Residential and Commercial, Oracle Primavera Cloud
  • ATCO Ltd. (ATCO) – Enterprise-wide Standardization in Energy and Resources, Primavera P6 EPPM, Primavera Unifier
  • Austin Industries (Austin Commercial) – Enterprise-wide Standardization in Public Infrastructure, Oracle Textura Payment Management
  • BlueCross BlueShield of South Carolina – Enterprise-wide Standardization in Residential and Commercial, Primavera Unifier
  • DPR Construction – Systems Integration in Residential and Commercial, Oracle Textura Payment Management
  • Facebook – Innovator of the Year, Jason Mennig of the Data Center Engineering and Construction business unit, and Company award for Solutions Implementation in Residential and Commercial, Primavera P6 EPPM
  • FirstEnergy Corp. – Systems Integration in Energy and Resources, Primavera P6 EPPM, Primavera Unifier and Oracle Primavera Portfolio Management (OPPM)
  • Hampton Roads Sanitation District – Systems Integration in Public Infrastructure, Primavera P6 EPPM, Primavera Unifier
  • Intercorp Retail – Solutions Implementation in Residential and Commercial, Primavera Unifier
  • Lendlease Corporation – Innovator of the Year for Data-driven Insights in Residential and Commercial, Michael Mull, Oracle Textura Payment Management, Textura Business Analytics and Innovator of the Year for Enterprise-wide Standardization in Residential and Commercial, Dominic Giannola, Oracle’s Primavera P6 EPPM
  • Matthew Harris – Innovator of the Year for Solutions Implementation in Public Infrastructure, Oracle Aconex
  • Monteith Construction Corp. – Solutions Implementation in Residential and Commercial, Oracle Primavera Cloud
  • Oceaneering International, Inc. – Innovator of the Year for Enterprise-wide Standardization in Energy and Resources, Dalbert Varnell Jr., Primavera P6 EPPM
  • Sage Homes Northwest – Systems Integration in Residential and Commercial, Oracle Primavera Cloud
  • VESTA – Enterprise-wide Standardization in Residential and Commercial, Oracle’s Primavera Unifier
 

Read about our Global 2019 Oracle Construction and Engineering Excellence Awards winners: https://www.oracle.com/corporate/awards/construction-engineering/winners.html

Contact Info
Judi Palmer
Oracle
+1.650.784.7901
judi.palmer@oracle.com
Brent Curry
H+K Strategies
+1 312.255.3086
brent.curry@hkstrategies.com
About Oracle Construction and Engineering

Asset owners and project leaders rely on Oracle Construction and Engineering solutions for the visibility and control, connected supply chain, and data security needed to drive performance and mitigate risk across their processes, projects, and organization. Our scalable cloud solutions enable digital transformation for teams that plan, build, and operate critical assets, improving efficiency, collaboration, and change control across the project lifecycle. www.oracle.com/construction-and-engineering.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1.650.784.7901

Brent Curry

  • +1 312.255.3086

Australia’s Favorite Department Store Gets More Agile with Oracle

Oracle Press Releases - Tue, 2019-11-05 07:00
Press Release
Australia’s Favorite Department Store Gets More Agile with Oracle Myer implements Oracle Retail technology to turbocharge merchandise, inventory, and planning processes

Redwood Shores, Calif. and Melbourne, Australia—Nov 5, 2019

Myer, Australia’s largest department store chain, embarked on a digital transformation to deliver more compelling products to customers. Carrying many products across 61 stores, Myer offers a wide range of cosmetics, clothing, footwear, toys, and homewares. By embracing Oracle Retail, Myer transformed its end-to-end merchandise systems, inventory processes, and planning capabilities, resulting in reduced stock and improved sales, margin, and markdown reduction. Today, Myer is better able to deliver the right inventory to delight customers across its properties.

“I am optimistic about the ability of our Oracle Merchandise Planning tools to improve the efficiency of our buying teams and to deliver a more profitable, more localised product range for our customers,” said Allan Winstanley, Chief Merchandise Officer, Myer.

The project was a business-led initiative supported by IT. The team established clear goals and expectations to achieve immediate business benefits and support long-term growth objectives. Myer built a solid foundation with Oracle Retail Merchandise Financial Planning and Oracle Retail Size Profile Optimization with support from Tata Consulting Services (TCS), a Platinum and Cloud Elite level member of Oracle PartnerNetwork (OPN).

Phase one of the implementation delivered immediate results with a stock reduction in time for holiday planning. In phase two, a cross-functional solution development team adopted an agile implementation methodology. The team worked in three-week sprints to deliver regular solution functionality for seasonal strategies, item planning, clustering, option planning, and assortment planning from Oracle Retail Planning and Optimization. This approach allowed Myer to enjoy accelerated benefits and the project paid for itself.

“The project was delivered across various teams, including Merchandise, IT and Tata Consulting Services (TCS), where we transitioned from waterfall to an agile project delivery approach, which ensured the success of the project,” said Troy Smith, Chief Information Officer, Myer.

“Automate and simplify. We always encourage our customers to embrace these attributes and focus on operational excellence,” said Mike Webster, senior vice president and general manager, Oracle Retail. “By establishing a strong foundation, Myer can pivot to the customer and localize the assortment based on store demographics, climate, and other store attributes.”

The Myer Oracle project received multiple awards, including the 2018 ACS Gold Disruptor Award, the 2018-2019 TCS APAC 4.0 EXCEL ERATE Best Project Embracing Business 4.0 award, and the 2017-2018 Myer Inspirational Team Award.

Contact Info
Kaitlin Ambrogio
Oracle PR
+1.781.434.9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website, www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1.781.434.9555

Handling PostgreSQL installations from packages

Yann Neuhaus - Sun, 2019-11-03 13:18
In this blog I will show how to handle a PostgreSQL installation with a customized PGDATA using the packages provided by the PostgreSQL community.

One issue with the packages is the hard-coded PGDATA in the service file, which will be overwritten with each update of PostgreSQL. This blog entry is based on PostgreSQL 12 with CentOS 7 and CentOS 8.

On a minimal installation, in my opinion, a few things are missing: the net-tools package and nano as an editor (I prefer nano over vi).

CentOS 7:

$ yum install net-tools
$ yum install nano

CentOS 8:

$ dnf install net-tools
$ dnf install nano

To use the PostgreSQL repository it is important to exclude PostgreSQL from the CentOS repositories.

On CentOS 7 you need to edit the CentOS-Base repo file to exclude PostgreSQL from Base and Updates.

$ nano /etc/yum.repos.d/CentOS-Base.repo

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#exclude PostgreSQL from os repository 
exclude=postgresql* 

#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$inf$
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#exclude PostgreSQL from os repository 
exclude=postgresql*

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$$

On CentOS 8 it is just one command to exclude PostgreSQL from the distribution repository:

$ dnf -y module disable postgresql

Add the PostgreSQL repository to CentOS 7; in this example it is PostgreSQL 12:

$ yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

And the same for CentOS 8

$ dnf install https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm

Now it is time to install PostgreSQL 12 from the PostgreSQL repository, but do NOT run initdb yet.

CentOS 7:

$ yum install postgresql12 postgresql12-server postgresql12-contrib

CentOS 8:

$ dnf install postgresql12 postgresql12-server postgresql12-contrib

Now it is time to create the override file for the PostgreSQL service file; the steps are identical on CentOS 7 and CentOS 8.

In my example, PGDATA is in /pg_data/12/data, mounted as its own volume.
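If you want to see the default that is hard coded by the package before overriding it, you can display the installed unit file. The value in the comment is the usual packaging default and may differ on your system:

$ systemctl cat postgresql-12.service | grep PGDATA
# typically shows something like: Environment=PGDATA=/var/lib/pgsql/12/data/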

So edit the postgresql-12.service file with systemctl edit:

$ systemctl edit postgresql-12.service

And add the needed content for your customized PGDATA:

[Service]
Environment=PGDATA=/pg_data/12/data

Save the change; it creates the file /etc/systemd/system/postgresql-12.service.d/override.conf, which will be merged with the original service file.

To check the content:

$ cat /etc/systemd/system/postgresql-12.service.d/override.conf
[Service]
Environment=PGDATA=/pg_data/12/data

Reload systemd:

$ systemctl daemon-reload

Your PGDATA should be owned by the postgres user; if it is not, make sure that it is:

$ chown -R postgres:postgres /pg_data/

Create the PostgreSQL instance as root user:

$ /usr/pgsql-12/bin/postgresql-12-setup initdb
Initializing database ... OK

Here it is:

[root@centos-8-blog /]# cd /pg_data/12/data/
[root@centos-8-blog data]# ls
base          pg_dynshmem    pg_multixact  pg_snapshots  pg_tblspc    pg_xact
global        pg_hba.conf    pg_notify     pg_stat       pg_twophase  postgresql.auto.conf
log           pg_ident.conf  pg_replslot   pg_stat_tmp   PG_VERSION   postgresql.conf
pg_commit_ts  pg_logical     pg_serial     pg_subtrans   pg_wal

From now on, PostgreSQL minor updates are done with yum update on CentOS 7 or dnf update on CentOS 8 in one step, with no extra downtime beyond the update itself.

But be careful: before running yum update or dnf update, STOP ALL POSTGRESQL INSTANCES!
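A minimal sketch of such a minor update cycle on CentOS 8 could look like this (on CentOS 7, replace dnf with yum):

$ sudo systemctl stop postgresql-12
$ sudo dnf update
$ sudo systemctl start postgresql-12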

This also works in environments with many instances: you need a service file and an override.conf for each instance, and any additional instance has to be created with initdb -D instead of postgresql-12-setup initdb. A rough sketch for a second instance is shown below.
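The following is only a rough sketch of how a second instance could be set up; the unit name, data directory and port are assumptions and need to be adapted to your environment:

# copy the unit file under a new name and point it to its own PGDATA
$ sudo cp /usr/lib/systemd/system/postgresql-12.service /etc/systemd/system/postgresql-12-pg2.service
$ sudo systemctl edit postgresql-12-pg2.service
[Service]
Environment=PGDATA=/pg_data/12/pg2

# initialize the second instance manually, not with postgresql-12-setup
$ sudo -u postgres /usr/pgsql-12/bin/initdb -D /pg_data/12/pg2

# give the instance its own port in /pg_data/12/pg2/postgresql.conf (e.g. port = 5433), then
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now postgresql-12-pg2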

This method also works with SLES 12.

 

Cet article Handling PostgreSQL installations from packages est apparu en premier sur Blog dbi services.

Bridge network missing Gateway – Docker Issue

DBASolved - Sun, 2019-11-03 11:41

Here is a little something for you.  I’m working on building a demo of Oracle GoldenGate Microservices between three (3) containers. In order to do this, I wanted to set up a dedicated network between the containers, which meant I needed to configure a network for the containers to use.  Docker […]

The post Bridge network missing Gateway – Docker Issue appeared first on DBASolved.

Categories: DBA Blogs

My Personal Thanks to the Chicago Police Department - the First Real Proving Ground for Oracle APEX

Joel Kallman - Sat, 2019-11-02 07:18

In 2001, the Chicago Police Department took a chance on APEX.  And with all thanks to them for the opportunity they provided us, Oracle APEX is what it is today.  We owe them a big debt of gratitude.  Let me explain.

As many people know, the genesis of Oracle APEX was an internal development project that began in 1999, to build a Web-based HTML calendar for use by Oracle employees.  My manager, Mike Hichwa, was the inventor of Oracle Web DB.  And when faced with the assignment of creating a new HTML calendar application for the company, the choices were a) WebDB, b) a lovingly hand-crafted PL/SQL application from scratch, or c) a yet-to-be-created application metadata framework (using Mike's lessons learned from WebDB).  We went with the latter, and Mike began the creation of the APEX framework while I developed a calendar application which was "programmed" to use this new framework.  Mocked and doubted by many at Oracle, we went live with the first production application in 3 months, rolled out to thousands of employees.  Having Tom Kyte to help us was instrumental in our success.

Over the next 18 months, we evolved this framework and created a number of other internal applications.  We thought we were ready to offer this framework for customers to use.  But one of the best things happened for APEX at that time.  When Larry Ellison was visiting New York City, Mike traveled to meet with him and brief him on the state of the framework, as well as Mike's aspirations to offer this framework as another tool from Oracle.  The advice offered by Larry to Mike - prove the framework with 30 real-world customers before you consider taking this live.  Invaluable guidance.

In 2001, Mike and I had an internal meeting in Chicago with Oracle Consulting.  The back-end information system for the Chicago Police Department (CPD), the Criminal History Record Information System (CHRIS), was written in Oracle Forms.  It had been developed over many years, and was a joint effort between Oracle Consulting and the Chicago Police Department.  The purpose of this meeting, at the time, was to discuss possible alternatives to the next state of CHRIS.  This meeting was ultimately precipitated by the estimated hardware requirements to run the next version of Oracle Forms.  They had estimated that the backend database server requirements alone would require 4 very large and very expensive new Sun Enterprise 10000 servers.  This was a lot of money to be spent on hardware with effectively no net gain in functionality for their end users.  We proposed APEX ("Flows", at the time), and they went with it.

Over a period of more than a year, a number of today's APEX product development team members worked directly, onsite, with Oracle Consulting and Chicago Police Department to move the functionality of their Oracle Forms applications to APEX.  It wasn't a 1-to-1 mapping, and it required a level of application and UI redesign.  But we were able to capitalize on the existing data structures and business logic, already present in the database.  The Oracle Forms applications and APEX apps were able to easily co-exist, because they were built on the same foundation.  There were also new systems developed as part of this effort, named CLEAR.  You can still read about CLEAR from this article from 2004.

This entire exercise was hugely beneficial to us.  We thought we were ready to go to market.  But once we dug into the requirements of a large-scale enterprise system like CHRIS, it uncovered many significant gaps in the functionality of the APEX framework.  Fortunately, we owned the framework and were able to simultaneously fill those functional gaps in APEX.  As a simple example, at the time there was no way to manage vectors of information in APEX session state.  This real-world requirement resulted in today's APEX collections.  When you own the framework and you are concurrently building the app, you can do anything!

Scalability was another concern.  While the original calendar application we wrote for Oracle had more than 25,000 users, let's face it - the use of a calendar is occasional throughout the day.  Contrast this with CHRIS, which had more than 10,000 total users, the vast majority who would interact with CHRIS frequently throughout the day.  The heavy concurrent usage of these applications provided us numerous opportunities to tune and optimize the APEX execution engine.  And talk about mission-critical applications - "business" slows to a crawl if you can't look up information about a person or log evidence.  And when business slows to a crawl, public safety is jeopardized.

Fast forward to 2019, and here we are with a large global community of hundreds of thousands of developers.  There are dedicated conferences, stickers, bloggers, videos, meetup groups, awards, books, podcasts, webinars, hosting providers, cloud services, partners & consulting companies, and thousands upon thousands of real-world successes from around the globe.  Much of our success can be traced to this proving ground, which was afforded us by the Chicago Police Department.

The purpose of this blog post is simple - I wish to offer my personal, sincere thanks to the Chicago Police Department for the gamble they took on us.  There was no true guarantee that APEX was going to exist beyond a "skunkworks" project, but they still forged ahead, given some assurances from Oracle and the alternatives.  They banked on us and they won.  Their real-world use cases stretched us and the technology in ways we had never imagined.  We learned so many valuable lessons during this project, and all of it resulted in a much more scalable, hardened, proven system by the time APEX was first offered as an Oracle Database feature in 2004.  We will forever be grateful to them.

For the record, these internal systems still run on Oracle APEX today, and are used by thousands of Chicago Police Department employees every day.  Now that is longevity, and a great investment.  Amidst today's rapid technology churn, this remains an extraordinary success story.


Patch by City of Chicago - http://www.publicsafetypatches.org/IL/Police/, Public Domain, Link

Global non partitioned index on table partitions

Tom Kyte - Fri, 2019-11-01 18:47
Hi, I have recently got some sql statements that is not performing well. <code>select * from v where a=? and b not in(,,,,....) and c =? and rownum<-100 </code> where v is a view. Original sql statement is similar to above statement. From explai...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator