Feed aggregator

Critical Patch Update for April 2018 Now Available

Steven Chan - Tue, 2018-04-17 22:22

The Critical Patch Update (CPU) for April 2018 was released on April 17, 2018. Oracle strongly recommends applying the patches as soon as possible.

The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. 

Supported products that are not listed in the "Supported Products and Components Affected" Section of the advisory do not require new patches to be applied.

The Critical Patch Update Advisory is available at the following location:

It is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches.

The next four Critical Patch Update release dates are:

  • July 17, 2018
  • October 16, 2018
  • January 15, 2019
  • April 16, 2019
Categories: APPS Blogs

Skip Goldengate Replicat Transaction

Michael Dinh - Tue, 2018-04-17 19:22
Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.15 17640173 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_131101.0605.2_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Nov 19 2013 03:18:45

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.
====================================================================================================
ORA-02292: integrity constraint (OWNER.MARY_JOE_FK) violated - child record found (status = 2292). DELETE FROM "OWNER"."T_JOE"  WHERE "JOENUMMER" = :b0.
====================================================================================================

+++ SKIPTRANSACTION
GGSCI> start replicat REP1 SKIPTRANSACTION

+++ REVIEW PRM
[gguser]$ grep -i discard rep1.prm
--REPERROR (DEFAULT, DISCARD)
REPERROR (-1, DISCARD)
REPERROR (2291, DISCARD) 
DISCARDFILE ./discard/rep1.discard append, MEGABYTES 1024
DISCARDROLLOVER AT 00:01
[gguser]$ 

+++ REVIEW SKIPPING FROM DISCARD
[gguser]$ grep -c "Skipping delete from OWNER.T_JOE" rep1.discard
15276

[gguser]$ grep -A2 "Skipping delete from OWNER.T_JOE" ./discard/rep1.discard|head
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87850906
*
JOENUMMER = 1
--
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87851339
*
JOENUMMER = 2
--
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87851735
*
[gguser@viz-cp-dc1-p11 oracle]$ grep -A2 "Skipping delete from OWNER.T_JOE" ./discard/rep1.discard|tail
*
JOENUMMER = 50093291
--
Skipping delete from OWNER.T_JOE at seqno 4475 rba 94033367
*
JOENUMMER = 50094681
--
Skipping delete from OWNER.T_JOE at seqno 4475 rba 94033767
*
JOENUMMER = 50094741
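
The discard entries follow a regular pattern, so the skipped key values can be pulled out with a small script instead of chained greps. A minimal sketch (the sample text below is copied from the excerpt above, not read from a real discard file):

```python
import re

# Sample lines in the format GoldenGate writes to the discard file
# (illustrative values taken from the excerpt above).
discard_text = """\
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87850906
*
JOENUMMER = 1
--
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87851339
*
JOENUMMER = 2
"""

# Collect the key value following each "Skipping delete" entry.
keys = [int(m.group(1)) for m in re.finditer(r"JOENUMMER = (\d+)", discard_text)]
print(len(keys), keys[0], keys[-1])  # number of skipped deletes and the key range
</imports>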

+++ REVIEW RBA FROM DISCARD
[gguser]$ grep rba rep1.discard|head -1
Aborting transaction on ./dirdat/nd beginning at seqno 4475 rba 87850906

[gguser]$ grep rba rep1.discard|tail -1
Skipping delete from OWNER.T_JOE at seqno 4475 rba 94033767
[gguser]$ 

+++ NOTICE MATCH WITH LOGDUMP
Logdump 23 >scanforendtrans
End of Transaction found at RBA 94033767 

====================================================================================================
GATHER DATA
====================================================================================================

GGATE@SQL> r
1 select count(*) from
2 (
3 (select JOENUMMER from OWNER.T_JOE minus select JOENUMMER from OWNER.T_JOE@dblink)
4 union all
5 (select JOENUMMER from OWNER.T_JOE@dblink minus select JOENUMMER from OWNER.T_JOE)
6 )
7*

COUNT(*)
----------
15273

GGATE@SQL>
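
The MINUS / UNION ALL construct above computes the symmetric difference of the key sets on both sides of the database link. In set terms it is equivalent to the following (a sketch with made-up key sets, not the real data):

```python
# Keys on the replicat target side vs. the source side (illustrative values).
target_keys = {1, 2, 3, 21, 23}
source_keys = {3, 21, 23}

# (target MINUS source) UNION ALL (source MINUS target)
only_in_target = target_keys - source_keys
only_in_source = source_keys - target_keys
out_of_sync = len(only_in_target) + len(only_in_source)

# Same result via Python's symmetric difference operator
assert out_of_sync == len(target_keys ^ source_keys)
print(out_of_sync)  # rows that differ between the two sides
```

Here the skipped transaction contained only deletes, so all differing rows sit on the target side.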

+++ CREATE TEMPORARY TABLE

GGATE@SQL> create table T_JOE_DEL as select JOENUMMER from OWNER.T_JOE minus select JOENUMMER from OWNER.T_JOE@dblink;

+++ REVIEW DATA FROM TEMPORARY TABLE TO COMPARE WITH DISCARD

GGATE@SQL> r
1 select * from (
2 select JOENUMMER from T_JOE_DEL order by 1 asc
3* ) where rownum <11

JOENUMMER
----------
1
2
3
21
23
24
25
26
27
28

10 rows selected.

GGATE@SQL>

GGATE@SQL> r
1 select * from (
2 select JOENUMMER from T_JOE_DEL order by 1 desc
3* ) where rownum <11

JOENUMMER
----------
50094741
50094681
50093291
50093221
50093191
50093101
50092851
50092791
50092781
50092741

10 rows selected.

GGATE@SQL>

====================================================================================================
CORRECT DATA
====================================================================================================

GGATE@SQL> delete from OWNER.T_JOE where JOENUMMER in (select JOENUMMER from T_JOE_DEL);

15273 rows deleted.

GGATE@SQL> commit;

Commit complete.

====================================================================================================
VERIFY ROW COUNT
====================================================================================================

+++ USING COUNT MAY NOT BE THE BEST OPTION.
GGATE@SQL> select count(*) from OWNER.T_JOE;

  COUNT(*)
----------
      9939

GGATE@SQL> select count(*) from OWNER.T_JOE@dblink;

  COUNT(*)
----------
      9939

GGATE@SQL> 
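
The caveat above ("using count may not be the best option") is that equal row counts do not prove equal content: two tables can have the same COUNT(*) yet hold different keys, so a key-set comparison like the MINUS queries earlier is the safer check. A small illustration:

```python
# Equal counts do not imply equal content (illustrative key sets).
source_keys = {1, 2, 3}
target_keys = {1, 2, 4}

assert len(source_keys) == len(target_keys)  # COUNT(*) matches...
assert source_keys != target_keys            # ...but the rows differ
print(sorted(source_keys ^ target_keys))     # keys that would need reconciling
```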

====================================================================================================
REVIEW REPORT FILE
====================================================================================================

[gguser]$ grep SKIPTRANSACTION REP1*.rpt
rep1.rpt:2018-04-17 12:15:15  INFO    OGG-01370  User requested START SKIPTRANSACTION. The current transaction will be skipped. Transaction ID 22.30.1923599, position Seqno 4475, RBA 87850906.

[gguser]$ grep -i skip ggserr.log
2018-04-17 12:15:14  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (gguser): start replicat rep1 SKIPTRANSACTION.
2018-04-17 12:15:14  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host 10.232.135.44:33310 (START REPLICAT rep1 SKIPTRANSACTION).
2018-04-17 12:15:15  INFO    OGG-01370  Oracle GoldenGate Delivery for Oracle, rep1.prm:  User requested START SKIPTRANSACTION. The current transaction will be skipped. Transaction ID 22.30.1923599, position Seqno 4475, RBA 87850906.
[gguser]$ 

====================================================================================================
LOGDUMP TO FIND END OF TRANSACTIONS
====================================================================================================

Logdump 15 >open ./dirdat/nd004475
Current LogTrail is ./dirdat/nd004475 
Logdump 16 >detail on
Logdump 17 >fileheader detail
Logdump 18 >ghdr on
Logdump 19 >detail data
Logdump 20 >ggstoken detail
Logdump 21 >pos 87850906
Reading forward from RBA 87850906 
Logdump 22 >n
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)  
RecLength  :   310  (x0136)   IO Time    : 2018/04/17 10:47:16.475.512   
IOType     :     3  (x03)     OrigNode   :   255  (xff) 
TransInd   :     .  (x00)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :     167409       AuditPos   : 779280 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/04/17 10:47:16.475.512 Delete               Len   310 RBA 87850906 
Name: OWNER.T_JOE 
Before Image:                                             Partition 4   G  b   

GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4148 6b55 4141 5441 4141 6264 7141 4159 0001 | AAAHkUAATAAAbdqAAY..  
TokenID x4c 'L' LOGCSN           Info x00  Length   10 
 3732 3833 3730 3834 3135                          | 7283708415  
TokenID x36 '6' TRANID           Info x00  Length   13 
 3232 2e33 302e 3139 3233 3539 39                  | 22.30.1923599  
   
Logdump 23 >scanforendtrans
End of Transaction found at RBA 94033767 
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)  
RecLength  :   331  (x014b)   IO Time    : 2018/04/17 10:47:16.429.234   
IOType     :     3  (x03)     OrigNode   :   255  (xff) 
TransInd   :     .  (x02)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :     167409       AuditPos   : 13903264 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/04/17 10:47:16.429.234 Delete               Len   331 RBA 94033767 
Name: OWNER.T_JOE 
Before Image:                                             Partition 4   G  e   
GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4148 6b55 4141 5741 4141 4e6c 6c41 4177 0001 | AAAHkUAAWAAANllAAw..  
   
Logdump 24 >n
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)  
RecLength  :   174  (x00ae)   IO Time    : 2018/04/17 10:47:24.429.491   
IOType     :    15  (x0f)     OrigNode   :   255  (xff) 
TransInd   :     .  (x00)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :     167409       AuditPos   : 13947088 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/04/17 10:47:24.429.491 FieldComp            Len   174 RBA 94034190 
Name: OWNER.NEW_DATA 
Before Image:                                             Partition 4   G  b   
GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4148 6a59 4141 5441 4142 794b 4541 412f 0001 | AAAHjYAATAAByKEAA/..  
TokenID x4c 'L' LOGCSN           Info x00  Length   10 
 3732 3833 3730 3834 3538                          | 7283708458  
TokenID x36 '6' TRANID           Info x00  Length   13 
 3132 2e31 362e 3330 3031 3139 37                  | 12.16.3001197  
   
Logdump 25 >open ./dirdat/nd004475
Current LogTrail is ./dirdat/nd004475 
Logdump 26 >count
LogTrail ./dirdat/nd004475 has 92822 records 
Total Data Bytes          92730182 
  Avg Bytes/Record             999 
Delete                       20937 
Insert                        5405 
FieldComp                      724 
LargeObject                  65755 
Others                           1 
Before Images                21163 
After Images                 71658 

Average of 1589 Transactions 
    Bytes/Trans .....      61161 
    Records/Trans ...         58 
    Files/Trans .....          5 
 
Logdump 27 >detail on
Logdump 28 >filter inc filename OWNER.T_JOE
Logdump 29 >count
Scanned     10000 records, RBA   12734577, 2018/04/17 07:25:42.524.558 
Scanned     20000 records, RBA   25670230, 2018/04/17 08:00:11.480.213 
Scanned     30000 records, RBA   38698934, 2018/04/17 08:30:24.488.669 
Scanned     40000 records, RBA   51436567, 2018/04/17 08:59:11.452.549 
Scanned     50000 records, RBA   63868041, 2018/04/17 09:43:10.477.605 
Scanned     60000 records, RBA   76010927, 2018/04/17 10:14:59.472.122 
Scanned     70000 records, RBA   94264594, 2018/04/17 10:47:31.447.436 
LogTrail ./dirdat/nd004475 has 15296 records 
Total Data Bytes           4757365 
  Avg Bytes/Record             311 
Delete                       15296 
Before Images                15296 
Filtering matched        15296 records 
          suppressed     77526 records 

Average of 2 Transactions 
    Bytes/Trans .....    2745786 
    Records/Trans ...       7648 
    Files/Trans .....        110 
 

OWNER.T_JOE                                      Partition 4 
Total Data Bytes           4757365 
  Avg Bytes/Record             311 
Delete                       15296 
Before Images                15296 
Logdump 30 >
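
The GGS token values (ORAROWID, LOGCSN, TRANID) in the logdump output are hex-encoded ASCII, which is why each byte dump on the left decodes to the readable value on the right. A quick sketch of that decoding, using the LOGCSN and TRANID bytes from the records above:

```python
# Hex byte groups exactly as logdump prints them (from the output above).
logcsn_hex = "3732 3833 3730 3834 3135"
tranid_hex = "3132 2e31 362e 3330 3031 3139 37"

def decode_token(hex_dump: str) -> str:
    """Decode a logdump hex byte dump into its ASCII value."""
    return bytes.fromhex(hex_dump.replace(" ", "")).decode("ascii")

print(decode_token(logcsn_hex))  # 7283708415
print(decode_token(tranid_hex))  # 12.16.3001197
```

Note that the decoded TRANID matches the transaction ID (22.30.1923599 for the first record) reported by the replicat in its report file.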

 

Best way to index uuid

Tom Kyte - Tue, 2018-04-17 16:06
Hello, what is the best way to index a uuid if I only do equality comparison on it? I guess that a hash index is better but I'm not sure. Regards, Stephane GINER
Categories: DBA Blogs

Order by at runtime

Tom Kyte - Tue, 2018-04-17 16:06
Hello, we have some huge tables to query, and with the order by clause (which must be used) it takes a very long time for a query to complete. As I know, we can do the order by at run time using dynamic SQL, but my questions are: 1. do we have any o...
Categories: DBA Blogs

Automatic list partitioning

Tom Kyte - Tue, 2018-04-17 16:06
Hi Tom! I use Oracle 12c. I have a list-partitioned table. How can I change non-automatic partitioning to automatic? Thank you!
Categories: DBA Blogs

how to generate .dsv files using SQL script?

Tom Kyte - Tue, 2018-04-17 16:06
we have around 100 tables out of 200 in which there is a date column. What we want is: first we want to change the NLS_date_format to DD-MON-YYYY HH12:MI:SS AM (using a script), then save the tables with the date in .DSV files. also n...
Categories: DBA Blogs

April 2018 Critical Patch Update Released

Oracle Security Team - Tue, 2018-04-17 14:57

Oracle today released the April 2018 Critical Patch Update.

This Critical Patch Update provided security updates for a wide range of product families, including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Oracle PeopleSoft, Oracle Industry Applications (Construction, Financial Services, Hospitality, Retail, Utilities), Oracle Java SE, and Oracle Systems Products Suite.

Approximately 35% of the security fixes provided by this Critical Patch Update are for non-Oracle Common Vulnerabilities and Exposures (CVEs): that is, security fixes for third-party products (e.g., open source components) that are included in traditional Oracle product distributions.  In many instances, the same CVE is listed multiple times in the Critical Patch Update Advisory, because a vulnerable common component (e.g., Apache) may be present in many different Oracle products.

Note that Oracle started releasing security updates in response to the Spectre (CVE-2017-5715 and CVE-2017-5753) and Meltdown (CVE-2017-5754) processor vulnerabilities with the January 2018 Critical Patch Update.  Customers should refer to this Advisory and the “Addendum to the January 2018 Critical Patch Update Advisory for Spectre and Meltdown” My Oracle Support note (Doc ID 2347948.1) for information about newly-released updates. At this point in time, Oracle has issued the corresponding security patches for Oracle Linux and Virtualization and Oracle Solaris on SPARC (SPARC 64-bit systems are not affected by Meltdown), and Oracle is working on producing the necessary updates for Solaris on x86 (noting the diversity of supported processors complicates the creation of the security patches related to these issues).

For more information about this Critical Patch Update, customers should refer to the Critical Patch Update Advisory and the executive summary published on My Oracle Support (Doc ID 2383583.1).   

Docker: How to build you own container with your own application

Dietrich Schroff - Tue, 2018-04-17 12:17
There are many tutorials out there on how to create a Docker container with an Apache webserver or nginx inside.
But you can hardly find a manual on how to build your own Docker container without pulling everything from a foreign repository.
Why should you not pull everything from foreign repositories?

You should read this article or this:
But since each phase of the development pipeline is built at a different time, …
…you can’t be sure that the same version of each dependency in the development version also got into your production version.
That is a good point.

As considered in this article, you can put some more constraints into your Dockerfile:
FROM ubuntu:14.04
or even pin the exact image by digest:
FROM ubuntu@sha256:0bf3461984f2fb18d237995e81faa657aff260a52a795367e6725f0617f7a56c
And that is the point where I tell you: create a process to build your own Docker containers from scratch and distribute them with your own repository, or copy them to all your Docker nodes (see here).

So here are the steps to create your own container from a local directory (here ncweb):

# ls -l ncweb/
total 12
-rw-r--r--    1 root     root            90 Nov 26 10:06 Dockerfile
-rw-r--r--    1 root     root           255 Nov 26 11:29 index.html
-rw-r--r--    1 root     root             0 Nov 26 11:29 logfile
-rwxr--r--    1 root     root           176 Nov 26 11:29 ncweb.sh 
The Dockerfile contains the following:

alpine:~# cat ncweb/Dockerfile
FROM alpine
WORKDIR /tmp
RUN mkdir ncweb
ADD .  /tmp
ENTRYPOINT [ "/tmp/ncweb.sh" ]
Into this directory you have to put everything you need, e.g. a complete JDK or your binaries or ...

And then change into this directory and build your container:

ncweb# docker build -t ncweb:0.2 .
The distribution to other docker nodes can be done like this:

# docker save ncweb:0.3 | ssh 192.168.178.47 docker load 
For more details read this posting.



Announcing GraalVM: Run Programs Faster Anywhere

OTN TechBlog - Tue, 2018-04-17 02:47

Current production virtual machines (VMs) provide high performance execution of programs only for a specific language or a very small set of languages. Compilation, memory management, and tooling are maintained separately for different languages, violating the ‘don’t repeat yourself’ (DRY) principle. This leads not only to a larger burden for the VM implementers, but also for developers due to inconsistent performance characteristics, tooling, and configuration. Furthermore, communication between programs written in different languages requires costly serialization and deserialization logic. Finally, high performance VMs are heavyweight processes with high memory footprint and difficult to embed.

Several years ago, to address these shortcomings, Oracle Labs started a new research project for exploring a novel architecture for virtual machines. Our vision was to create a single VM that would provide high performance for all programming languages, therefore facilitating communication between programs. This architecture would support unified language-agnostic tooling for better maintainability and its embeddability would make the VM ubiquitous across the stack.

To meet this goal, we have invented a new approach for building such a VM. After years of extensive research and development, we are now ready to present the first production-ready release.

Introducing GraalVM

Today, we are pleased to announce the 1.0 release of GraalVM, a universal virtual machine designed for a polyglot world.

GraalVM provides high performance for individual languages and interoperability with zero performance overhead for creating polyglot applications. Instead of converting data structures at language boundaries, GraalVM allows objects and arrays to be used directly by foreign languages.

Example scenarios include accessing functionality of a Java library from Node.js code, calling a Python statistical routine from Java, or using R to create a complex SVG plot from data managed by another language. With GraalVM, programmers are free to use whatever language they think is most productive to solve the current task.

GraalVM 1.0 allows you to run:

- JVM-based languages like Java, Scala, Groovy, or Kotlin
- JavaScript (including Node.js)
- LLVM bitcode (created from programs written in e.g. C, C++, or Rust)
- Experimental versions of Ruby, R, and Python

GraalVM can either run standalone, embedded as part of platforms like OpenJDK or Node.js, or even embedded inside databases such as MySQL or the Oracle RDBMS. Applications can be deployed flexibly across the stack via the standardized GraalVM execution environments. In the case of data processing engines, GraalVM directly exposes the data stored in custom formats to the running program without any conversion overhead.

For JVM-based languages, GraalVM offers a mechanism to create precompiled native images with instant start up and low memory footprint. The image generation process runs a static analysis to find any code reachable from the main Java method and then performs a full ahead-of-time (AOT) compilation. The resulting native binary contains the whole program in machine code form for immediate execution. It can be linked with other native programs and can optionally include the GraalVM compiler for complementary just-in-time (JIT) compilation support to run any GraalVM-based language with high performance.

A major advantage of the GraalVM ecosystem is language-agnostic tooling that is applicable in all GraalVM deployments. The core GraalVM installation provides a language-agnostic debugger, profiler, and heap viewer. We invite third-party tool developers and language developers to enrich the GraalVM ecosystem using the instrumentation API or the language-implementation API. We envision GraalVM as a language-level virtualization layer that allows leveraging tools and embeddings across all languages.

GraalVM in Production

Twitter is one of the companies already deploying GraalVM in production today for executing their Scala-based microservices. The aggressive optimizations of the GraalVM compiler reduce object allocations and improve overall execution speed. This results in fewer garbage collection pauses and less computing power necessary for running the platform. See this presentation from a Twitter JVM Engineer describing their experiences in detail and how they are using the GraalVM compiler to save money. In the current 1.0 release, we recommend JVM-based languages and JavaScript (including Node.js) for production use, while R, Ruby, Python and LLVM-based languages are still experimental.

Getting Started

The binary of the GraalVM v1.0 (release candidate) Community Edition (CE) built from the GraalVM open source repository on GitHub is available here.

We are looking for feedback from the community for this release candidate. We welcome feedback in the form of GitHub issues or GitHub pull requests.

In addition to the GraalVM CE, we also provide the GraalVM v1.0 (release candidate) Enterprise Edition (EE) for better security, scalability and performance in production environments. GraalVM EE is available on Oracle Cloud Infrastructure and can be downloaded from the Oracle Technology Network for evaluation. For production use of GraalVM EE, please contact graalvm-enterprise_grp_ww@oracle.com.

Stay Connected

The latest up-to-date downloads and documentation can be found at www.graalvm.org. Follow our daily development, request enhancements, or report issues via our GitHub repository at www.github.com/oracle/graal. We encourage you to subscribe to these GraalVM mailing lists:

- graalvm-announce@oss.oracle.com
- graalvm-users@oss.oracle.com
- graalvm-dev@oss.oracle.com

We communicate via the @graalvm alias on Twitter and watch for any tweet or Stack Overflow question with the #GraalVM hash tag.

Future

This first release is only the beginning. We are working on improving all aspects of GraalVM; in particular the support for Python, R and Ruby.

GraalVM is an open ecosystem and we encourage building your own languages or tools on top of it. We want to make GraalVM a collaborative project enabling standardized language execution and a rich set of language-agnostic tooling. Please find more at www.graalvm.org on how to:

- allow your own language to run on GraalVM
- build language-agnostic tools for GraalVM
- embed GraalVM in your own application

We look forward to building this next generation technology for a polyglot world together with you!

Julian Date Full Explanation

Tom Kyte - Mon, 2018-04-16 21:46
Hello, I'm fairly new, but I have been finding bits and pieces on Julian date conversion, but not a full explanation of it, i.e. TO_NUMBER(TO_CHAR(SYSDATE, 'YYYYDDD'))-1900000. Firstly, the SYSDATE is using the T...
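
For context, the expression in the question builds a day-of-year based number: TO_CHAR(SYSDATE, 'YYYYDDD') concatenates the 4-digit year with the 3-digit day of the year, and subtracting 1900000 turns it into the common CYYDDD-style value (century flag, two-digit year, day of year). The same arithmetic in Python, as a sketch rather than Oracle itself:

```python
from datetime import date

def oracle_style_julian(d: date) -> int:
    """Mimic TO_NUMBER(TO_CHAR(d, 'YYYYDDD')) - 1900000."""
    yyyyddd = int(d.strftime("%Y%j"))  # %j is the zero-padded day of the year
    return yyyyddd - 1900000

print(oracle_style_julian(date(2018, 4, 16)))  # 118106
```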
Categories: DBA Blogs

Help needed with match_recognize

Tom Kyte - Mon, 2018-04-16 21:46
Dear Mr. Tom, thank you for all your help and time in supporting our requests. I have some issues with MATCH_RECOGNIZE. Oracle Version - 12.1.0.2.0, OS - REDHAT Linux. CREATE TABLE test_match_recognize(employment_id NUMBER (10, 0) NOT N...
Categories: DBA Blogs

How to remove multiple word occurance from an input string using oracle PL/SQL

Tom Kyte - Mon, 2018-04-16 21:46
Remove duplicate words from an address using Oracle PL/SQL. There are two types of addresses; below is the example: 1. '3 Mayers Court 3 Mayers Court' : where the total number of words in the address is even and either all words/combination of ...
Categories: DBA Blogs

merge and dbms_errlog behaviour with ORA-30926

Tom Kyte - Mon, 2018-04-16 21:46
Hi all, I have a merge statement that sometimes fails when the source table has duplicated merge keys. To save time I tried to use the dbms_errlog package and let it save the culprit rows, without failing the statement itself. The error I get befor...
Categories: DBA Blogs

2018.pgconf.de, recap

Yann Neuhaus - Mon, 2018-04-16 11:43

Finally I am home from pgconf.de in Berlin at the beautiful Müggelsee. Besides meeting core PostgreSQL people such as Devrim, Bruce and Andreas, joining Jan again for great discussions and some beers, joking with Anja, being at the dbi services booth, discussing with people, kidding with Hans: was it worth the effort? Yes, it was, and here is why.


We had very interesting discussions at our booth, ranging from migrations to PostgreSQL, PostgreSQL training corporations and interest in our OpenDB appliance.

The opening session “Umdenken! 11 Gebote zum IT-Management” raised a question we always ask ourselves as well: when you do HA, how much complexity does the HA layer add? Maybe it is the HA layer itself that causes an outage which would not have happened without it? Reducing complexity is key to robust and reliable IT operations.

Listening to Bruce Momjian is always a joy: This time it was about PostgreSQL sharding. Much is already in place, some will come with PostgreSQL 11 and other stuff is being worked on for PostgreSQL 12 next year. Just check the slides which should be available for download from the website soon.

Most important: The increasing interest in PostgreSQL. We can see that at our customers, at conferences and in the interest in our blog posts about that topic. Sadly, when you have a booth, you are not able to listen to all the talks you would like to. This is the downside :(

So, mark your calendar: next year's date and location are already fixed: May 10, 2019, in Leipzig. I am sure we will have some updates by then.


 

Cet article 2018.pgconf.de, recap est apparu en premier sur Blog dbi services.

The Intelligent Chatbot to Customer Service Agent Hand-Off within Zendesk

Chatbots are on the rise. By 2020, over 80% of businesses are expected to implement some type of chatbot automation (Business Insider, 2016). This type of automation is inevitable due to the amount of time and money that chatbots can save a business. However, especially in the early days of the chatbot revolution, a bot will not be able to solve all the problems that a human can. One specific use case for chatbots that we have examined is customer support. Customer support bots can reduce the workload of support staff by a great deal, but some customers will not find the support they need with a bot. Wouldn’t it be great if a customer could seamlessly go from talking to a bot to a live person in the same interface? That is exactly what we created at Fishbowl and you can see in this video.

Starting a conversation with this bot begins using Oracle’s chatbot framework, a feature of Oracle Mobile Cloud Service, much like the rest of our bots. It has the capability to do all the integrations that our other bots have with systems such as Salesforce, Oracle Engagement Cloud, and the Zendesk Software and Support ticketing system. However, this bot has the ability to connect to Zendesk’s live chat service for more personal support from a live agent. Using the bot, information is collected to be passed to the live agent, so that the live agent knows what was already asked and can waste no time in helping the customer.

To move from a bot conversation to a live chat conversation and back again, customizations had to be made to our web client. Since the live chat feature in Oracle’s bot framework is still a work in progress, the best solution was to stop sending messages to the bot after the user goes through the “connect to a live agent” chat flow. Instead, the web client sends messages directly to Zendesk and receives them in turn. Once the conversation has concluded, the bot returns to normal and talks to the bot framework once again.
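
The routing described above amounts to a small piece of state in the web client: once the hand-off flow completes, messages bypass the bot framework and go straight to the live-chat channel until the agent conversation ends. A simplified sketch of that switch (the class, method names, and trigger phrases are hypothetical, not Fishbowl's actual code):

```python
class ChatRouter:
    """Route user messages to either the bot framework or a live agent."""

    def __init__(self):
        self.live_agent_active = False

    def handle(self, message: str) -> str:
        if self.live_agent_active:
            if message == "/end":          # agent conversation concluded
                self.live_agent_active = False
                return "bot: back to normal"
            return "zendesk: " + message   # forward directly to live chat
        if message == "connect to a live agent":
            self.live_agent_active = True  # stop sending messages to the bot
            return "zendesk: handoff with conversation context"
        return "bot: " + message           # normal bot framework flow

router = ChatRouter()
print(router.handle("hello"))                    # bot: hello
print(router.handle("connect to a live agent"))  # zendesk: handoff with conversation context
print(router.handle("my order is late"))         # zendesk: my order is late
print(router.handle("/end"))                     # bot: back to normal
```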

Customer service is a critical component of the overall customer experience, and getting customers answers to common questions can go a long way to ensure brand loyalty. Some stats suggest that 80% of routine questions can be answered by a chatbot, but when an agent is needed it is important to provide a seamless handoff while providing the agent with context to immediately begin servicing the customer. If integrated correctly, chatbots and customer service/support representatives (agents) can together improve the customer service experience.

You can see more of the intelligent chatbots Fishbowl has created using Oracle Mobile Cloud here: https://www.fishbowlsolutions.com/oracle-intelligent-chatbot-cloud-service-consulting/

The post The Intelligent Chatbot to Customer Service Agent Hand-Off within Zendesk appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

How to trap DDL Activities and get the Sql text of such statements

Tom Kyte - Mon, 2018-04-16 03:26
Hello Sir, I am quite surprised to see that today your site is not blocked for me. Sir, I've an immediate requirement to trap all the activities fired in the database (DML as well as DDL). As auditing is not supportive for this purpose, I need the...
Categories: DBA Blogs

How to install the Oracle Integration Cloud on premises connectivity agent (18.1.3)

Amis Blog - Mon, 2018-04-16 02:00
Recapitulation on how to install the Oracle Integration Cloud on premises connectivity agent

Recently (April 2018) I gained access to the new Oracle Integration Cloud (OIC), version 18.1.3.180112.1616-762, and wanted to make an integration connection to an on-premise database. For this purpose, an on-premise connectivity agent needs to be installed, as is thoroughly explained by my colleague Robert van Mölken in his blog prepraring-to-use-the-ics-on-premises-connectivity-agent.

With the (new) Oracle Integration Cloud environment the installation of the connectivity agent has slightly changed though, as shown below. It took me some effort to get the new connectivity agent working, so I decided to recapture the steps needed in this blog. Hopefully, this will give you a headstart to get the connectivity agent up and running.

Prerequisites

Access to an Oracle Integration Cloud Service instance.

Rights to do some installation on a local / on-premise environment, Linux based (e.g. the SOA VirtualBox appliance).

 

Agent groups

For connection purposes you need to have an agent group defined in the Oracle Integration Cloud.

To define an agent group, you need to select the agents option in the left menu pane.  You can find any already existing agent groups here as well.

Select the ‘create agent group’ button to define a new agent group and fill in this tiny web form.


Downloading and extracting the connectivity agent

For downloading the connectivity agent software you also need to select the agents option in the left menu pane, followed by the download option in the upper menu bar.

After downloading you have a file called ‘oic_connectivity_agent.zip’, which takes 145,903,548 bytes.

This has a much smaller footprint than the former connectivity agent software (ics_conn_agent_installer_180111.0000.1050.zip, which takes 1,867,789,797 bytes).

For installation of the connectivity agent, you need to copy and extract the file to an installation folder of your choice on the on-premise host.

After extraction you see several files, amongst which ‘InstallerProfile.cfg’.


Setting configuration properties

Before starting the installation you need to edit the content of the file InstallerProfile.cfg.

Set the value for the property OIC_URL to the right hostname and sslPort *.

Also set the value for the property agent_GROUP_IDENTIFIER to the name of the agent group  you want the agent to belong to.

After filling in these properties, save the file.
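
A filled-in InstallerProfile.cfg then looks roughly like this (the hostname, port, and group name below are illustrative placeholders, not values from a real instance):

```
# InstallerProfile.cfg -- placeholder values
OIC_URL=https://my-oic-instance.example.com:443
agent_GROUP_IDENTIFIER=MY_AGENT_GROUP
```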


* On the instance details page you can see the right values for the hostname and sslPort. This is the page which shows you the weblogic instances that host your OIC and it looks something like this:

Certificates

For my trial purpose I didn’t need a certificate to communicate between the OIC and the on-premise environment.

But if you do, you can follow the next 2 steps:


a. Go to the agenthome/agent/cert/ directory.

b. Run the following command: keytool -importcert -keystore keystore.jks -storepass changeit -keypass password -alias alias_name  -noprompt -file certificate_file

 

Java JDK

Before starting the installation of the connectivity agent, make sure your JAVA JDK is at least version 8, with the JAVA_HOME and PATH set.

To check this, open a terminal window and type: ‘java -version’ (without the quotes)

You should see the installed Java version, e.g. java version “1.8.0_131”.

To add the Java binaries to the PATH, type ‘export PATH=$JAVA_HOME/bin:$PATH’ (without the quotes; this is the bash syntax, for csh use ‘setenv PATH $JAVA_HOME/bin:$PATH’)

Running the installer

You can start the connectivity agent installer with the command: ‘java -jar connectivityagent.jar’ (again, without the quotes).

During the installation you are asked for your OIC username and corresponding password.

The installation finishes with a message that the agent was installed successfully and is now up and running.

Check the installed agent

You can check that the agent is communicating within the agent group you specified.

The number of agents communicating within the group is shown behind its name.

AgentGroupShowingAgentCapture

The post How to install the Oracle Integration Cloud on premises connectivity agent (18.1.3) appeared first on AMIS Oracle and Java Blog.

Oracle text index super slow

Tom Kyte - Sun, 2018-04-15 09:06
Hi Tom, I am not an expert in Oracle so I thought I would use your help here. I have an application which does a full text search but it is very very slow. Not sure if I built the index correctly. Below are the details: BELOW IS USED TO CREATE INDE...
Categories: DBA Blogs

schedule the PL SQL query and save in .csv file

Tom Kyte - Sun, 2018-04-15 09:06
Hi Tom, I need to schedule a SQL script and run it every day at 10 am. But I don't know how. I have searched some similar topics but I still don't understand. Here's the SQL that I have been using to query my data and I export it manually in .csv...
Categories: DBA Blogs

Missing values using pipelined functions and refcursor

Tom Kyte - Sun, 2018-04-15 09:06
Hi, we are using several pipelined functions to return values to an API based on a plan number parameter (table type). The results of these functions are combined (using table(function)) and returned in an open refcursor to the webservices (API). ...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator