Feed aggregator

UKOUG “Lifetime Achievement Award” Speaker Award

Richard Foote - Tue, 2018-12-18 17:32
I was recently very honoured and flattered to have received the “Lifetime Achievement Award” Speaker Award from the UKOUG. I have only managed to get to the excellent UKOUG Tech conferences on a couple of occasions, so it was both a thrill and a surprise to be so honoured. Unfortunately, I wasn’t able to make […]
Categories: DBA Blogs

Database Design for Invoices Table with Many Null Columns

Tom Kyte - Tue, 2018-12-18 16:26
Hello, Ask Tom Team. I have a 70-column table storing valid invoices. There are cases where the invoices are rejected because of business rules. When the invoices are rejected, just a few columns are filled (up to 5), then all 65 remaining columns ar...
Categories: DBA Blogs

Parallel execution of Select .. for update with Insert

Tom Kyte - Tue, 2018-12-18 16:26
Hi Tom, We have a stored proc something like below: <code>Proc P1 (p_id in varchar2) is cursor c1 is select a,b,c from t1, t2 where t1.pk = t2.fk and t1.id = p_id ; v_id varchar2(10); Begin Open c1 (p_id); fetch c1 into v_...
Categories: DBA Blogs

how to equate substring of one table column to a value in a list from another table column as part of an exists?

Tom Kyte - Tue, 2018-12-18 16:26
Hello Oracle Masters, I was looking at 'connect by level' (https://asktom.oracle.com/pls/apex/asktom.search?tag=clever-sql) hoping I could make use of it, but don't seem to be able to. Perhaps there is another way? We have a table of products ...
Categories: DBA Blogs

How to extract distinct group of associated records (not exactly hierarchical in nature)

Tom Kyte - Tue, 2018-12-18 16:26
Hello All, It would be greatly appreciated if someone could provide a way to extract distinct groups/arrays of associated records (they are not parent-child or hierarchical in nature), as shown below. The same table has been created wi...
Categories: DBA Blogs

ORA-01779: cannot modify a column which maps to a non key-preserved table

Tom Kyte - Tue, 2018-12-18 16:26
This is on checking below link - https://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:273215737113#followup-3016346200346884929 Below answer given by Tom - "Consider, if T had: <code>OBJECT_NAME OBJECT_ID -------------- ----...
Categories: DBA Blogs

How do I Declare/Define a date field in a query

Tom Kyte - Tue, 2018-12-18 16:26
When I was working with MSSQL I could DECLARE a start & end date (or other parameters) to avoid having to change a variable in the query itself. This is what the table looks like -------------------- ------ ------ DateTime Action User --------...
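
In Oracle the usual equivalent is a bind variable; a minimal sketch in SQL*Plus syntax (the table and column names here are hypothetical, following the question):

variable start_date varchar2(10)
variable end_date   varchar2(10)

exec :start_date := '2018-01-01';
exec :end_date   := '2018-12-31';

select action, count(*)
from   t_log
where  datetime >= to_date(:start_date, 'YYYY-MM-DD')
and    datetime <  to_date(:end_date, 'YYYY-MM-DD') + 1
group by action;
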
Categories: DBA Blogs

Deploying SQL Server on K8s with Helm charts

Yann Neuhaus - Tue, 2018-12-18 10:06

During the last DockerCon EU in Barcelona, I heard a lot about Helm with K8s architectures, so it was a good opportunity to write about it after attending the conference.


In a nutshell, Helm is a package manager for K8s; you may think of it like the package managers available on the Linux side, with apt, yum or zypper to cite a few of them. Helm charts are a key component of the Helm architecture and make deployments easy, standardized and reusable, which is definitely what I was looking for in our current CI/CD pipeline implementation for our DMK maintenance tool.
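
For readers new to Helm, the day-to-day workflow looks a lot like a Linux package manager. A quick sketch with Helm 2 syntax (the stable repository URL shown is the public one at the time of writing):

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm search mssql                       # look for existing charts
$ helm install stable/mssql-linux --name mymssql
$ helm delete --purge mymssql             # remove the release entirely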

Helm matters for enterprise-scale deployments by addressing common challenges with the following (non-exhaustive) list of capabilities:

  • Helm charts can be shared easily across the enterprise, or with contributors around the world, through a GitHub repository.
  • Using Helm charts allows you to quickly get a specific environment for testing.
  • Existing charts can be adapted for specific deployments depending on the context.
  • The easy deployment and deletion of applications makes Helm adoption easier.
  • Production-ready packages are possible; they eliminate deployment errors due to incorrect configuration files and reduce the complexity of maintaining an application catalog.

In my case, I have wanted for a while to simplify my first SQL Server container deployments on K8s, which relied on a complex YAML file with a lot of objects like services, pods, secrets and persistent volumes, by using Helm charts. One additional motivation was the capability to change some preconfigured settings in-flight in the deployment when I wanted to switch from my minikube environment to my AKS cluster on Azure.

In this first write-up I used a custom dbi services image for SQL Server (a production-ready docker image) and I decided to use this image as the base of my custom Helm chart. First of all, let's say I didn't start from scratch: I used the mssql-linux stable chart available on GitHub, but obviously customized it for my own requirements:

  • The custom dbi services image for SQL Server includes the creation of the flexible architecture, so I had to update the persistent volume and claim configuration to match this storage map.
  • The custom image can optionally deploy our DMK maintenance tool, which includes different SQL objects to perform maintenance of customer databases (basically update statistics, index rebuild and backup tasks). So I needed to add a parameter to enable or disable the deployment of this tool inside the pod.
  • T-SQL scripts are also executed during container startup; they apply different server-level configuration settings, configure the tempdb database file placement and add some trace flags to meet our best practices. There is no real impact on the Helm chart here.
  • An “application” user may optionally be created and will be made a member of the dbcreator server role, following the least-privilege principle. In most cases we consider that an application doesn't need sysadmin privileges, even on a SQL Server pod and, more generally speaking, on microservice architectures. So, as already done for the DMK parameter described previously, I had to add another parameter for creating this user when the pod is spun up.

Let's begin with my Helm chart folder hierarchy, which includes the important files Chart.yaml, values.yaml and deployment.yaml.

[dab@DBI-LT-DAB:#]> tree /f
…
T:.
│   .helmignore
│   Chart.yaml
│   values.yaml
│
├───charts
└───templates
        deployment.yaml
        NOTES.txt
        pvc-app.yaml
        pvc-backup.yaml
        pvc-data.yaml
        pvc-tempdb.yaml
        pvc-tranlog.yaml
        secret.yaml
        service.yaml
        _helpers.tpl

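As a side note, if you have no existing chart to start from, a skeleton with a very similar hierarchy (Chart.yaml, values.yaml and a templates folder with deployment.yaml, service.yaml, NOTES.txt and _helpers.tpl) can be scaffolded with:

$ helm create dbi-mssql-linux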

Let’s focus on the deployment.yaml file and the customized part within the spec.containers.env section related to my docker image specifications:

  • MSSQL_USER and MSSQL_USER_PASSWORD are environment variables related to my “application” user
  • The DMK environment variable enables the deployment of the DMK maintenance tool

In addition, the environment variables related to the database file placement have been customized for master, tempdb and user databases according to my flexible architecture specifications with:

  • /u00 (for application files)
  • /u01 (for user data and system database files)
  • /u02 (for transaction log files)
  • /u03 (for tempdb database files)
  • /u98 (for backup files).

The mountPath and persistent volume claim sections have also been updated accordingly, as shown below:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "mssql.fullname" . }}
  labels:
    app: {{ template "mssql.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- if .Values.deployment.annotations }}
  annotations:
{{ toYaml .Values.deployment.annotations | indent 4 }}
{{- end }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "mssql.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "mssql.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: ACCEPT_EULA
              value: "{{ .Values.acceptEula.value | upper }}"
            - name: MSSQL_PID
              value: "{{ .Values.edition.value }}"
            - name: MSSQL_SA_PASSWORD
              valueFrom:
               secretKeyRef:
                 name: {{ template "mssql.fullname" . }}-sa-secret
                 key: sapassword
            - name: MSSQL_USER
              value: "{{ .Values.usersql.value }}"
            - name: MSSQL_USER_PASSWORD
              valueFrom:
               secretKeyRef:
                 name: {{ template "mssql.fullname" . }}-user-secret
                 key: userpassword
            - name: DMK
              value: "{{ .Values.DMK.value }}"
            - name: MSSQL_MASTER_DATA_FILE
              value: /u01/sqlserverdata/mssqlserver/master.mdf
            - name: MSSQL_MASTER_LOG_FILE
              value: /u01/sqlserverdata/mssqlserver/mastlog.ldf
            - name: MSSQL_DATA_DIR
              value: /u01/sqlserverdata/mssqlserver
            - name: MSSQL_LOG_DIR
              value: /u02/sqlserverlog/mssqlserver
            - name: MSSQL_TEMPDBDATA_DIR
              value: /u03/sqlservertempdb/mssqlserver
            - name: MSSQL_TEMPDBLOG_DIR
              value: /u03/sqlservertempdb/mssqlserver
            - name: MSSQL_BACKUP_DIR
              value: /u98/sqlserver/backup/mssqlserver
            - name: MSSQL_ERROR_LOG
              value: /u00/app/sqlserver/admin/mssqlserver/log
            - name: MSSQL_DUMP_DIR
              value: /u00/app/sqlserver/admin/mssqlserver/dump
            - name: MSSQL_TCP_PORT
              value: "{{ .Values.service.port.value }}"
            - name: MSSQL_LCID
              value: "{{ .Values.lcid.value }}"
            - name: MSSQL_COLLATION
              value: "{{ .Values.collation.value }}"
            - name: MSSQL_ENABLE_HADR
              value: "{{ .Values.hadr.value }}"
            {{ if .Values.resources.limits.memory }}
            - name: MSSQL_MEMORY_LIMIT_MB
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            {{ end }}
          ports:
            - name: mssql
              containerPort: {{ .Values.service.port.value }}
          volumeMounts:
            - name: data
              mountPath: /u01
            - name: transactionlog
              mountPath: /u02
            - name: tempdb
              mountPath: /u03
            - name: backup
              mountPath: /u98 
            - name: app
              mountPath: /u00
          livenessProbe:
            tcpSocket:
               port: mssql
            initialDelaySeconds: {{ .Values.livenessprobe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessprobe.periodSeconds }}
          readinessProbe:
            tcpSocket:
               port: mssql
            initialDelaySeconds: {{ .Values.readinessprobe.initialDelaySeconds }}
            periodSeconds: {{ .Values.readinessprobe.periodSeconds }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}
      volumes:
      - name: master
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingMasterClaim }}
          claimName: {{ .Values.persistence.existingMasterClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-master
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: data
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingDataClaim }}
          claimName: {{ .Values.persistence.existingDataClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-data
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: transactionlog
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingTransactionLogClaim }}
          claimName: {{ .Values.persistence.existingTransactionLogClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-translog
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: tempdb
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingTempdbClaim }}
          claimName: {{ .Values.persistence.existingTempdbClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-tempdb
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: backup
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingBackupClaim }}
          claimName: {{ .Values.persistence.existingBackupClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-backup
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: app
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingAppClaim }}
          claimName: {{ .Values.persistence.existingAppClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-app
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}


Referring to my flexible architecture, I added two YAML files that contain the new persistent volume claim definitions: pvc-app for /u00 (app) and pvc-tempdb for /u03 (tempdb).

Here is the content of my persistent volume claim for tempdb, for instance:

{{- if and .Values.persistence.enabled (not .Values.persistence.existingTempdbClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "mssql.fullname" . }}-tempdb
  labels:
    app: {{ template "mssql.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
{{- if .Values.persistence.annotations }}
  annotations:
{{ toYaml .Values.persistence.annotations | indent 4 }}
{{- end }}
spec:
  accessModes:
    - {{ .Values.persistence.tempdbAccessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.tempdbSize | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}


I also updated secret.yaml to include a section dedicated to my “application” user password:

---
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "mssql.fullname" . }}-user-secret
  labels:
    app: {{ template "mssql.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
type: Opaque
data:
  {{ if .Values.userpassword }}
  userpassword: {{ .Values.userpassword.value | b64enc | quote }}
  {{ else }}
  userpassword: {{ randAlphaNum 20 | b64enc | quote }}
  {{ end }}


Note that the Helm chart allows you to define your own password; if it is left empty, a random password will be generated instead.
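
In the latter case, the generated password can be retrieved afterwards by decoding the K8s secret; a sketch based on the secret name produced further below:

$ kubectl get secret sqlhelm-dbi-mssql-linux-user-secret -o jsonpath="{.data.userpassword}" | base64 --decode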

Finally, the values.yaml file contains the predefined values for my release deployment:

# General parameters
acceptEula: 
  value: "Y"
edition: 
  value: "Developer"
DMK: 
  value: "N"
collation: 
  value: SQL_Latin1_General_CP1_CI_AS
lcid: 
  value: 1033
hadr: 
  value: 0
# User parameters
sapassword: 
  value: Password1
usersql: 
  value: dbi_user
userpassword: 
  value: Password2
# Image parameters
image:
  repository: dbi/mssql-server-linux
  tag: 2017-CU12
  pullPolicy: IfNotPresent
# Service parameters
service:
  type: 
    value: LoadBalancer
  port: 
    value: 1433
  annotations: {}
deployment:
  annotations: {}
# Volumes & persistence parameters
persistence:
  enabled: true
  # existingDataClaim:
  # existingTransactionLogClaim:
  # existingBackupClaim:
  # existingMasterClaim:
  # existingAppClaim:
  # existingTempdbClaim:
  storageClass: ""
  dataAccessMode: ReadWriteOnce
  dataSize: 5Gi
  transactionLogAccessMode: ReadWriteOnce
  transactionLogSize: 5Gi
  tempdbAccessMode: ReadWriteOnce
  tempdbSize: 5Gi
  backupAccessMode: ReadWriteOnce
  backupSize: 5Gi
  masterAccessMode: ReadWriteOnce
  masterSize: 5Gi
  appAccessMode: ReadWriteOnce
  appSize: 5Gi
# Probe parameters
livenessprobe:
  initialDelaySeconds: 20
  periodSeconds: 15
readinessprobe:
  initialDelaySeconds: 20
  periodSeconds: 15
# Resource parameters
resources:
  limits:
  #  cpu: 100m
    memory: 5Gi
  # requests:
  #  cpu: 100m
  #  memory: 2Gi
nodeSelector: {}
  # kubernetes.io/hostname: minikube

Let's install my release with the helm command below:

$ helm install --name sqlhelm . --set DMK.value=Y --set service.port.value=1451


Pretty simple, right? Note also that I may very easily change the predefined parameter values according to my context. For instance, the DMK maintenance tool is not installed by default when the container is spun up, and I changed that by explicitly setting DMK.value to Y. The same applies to the SQL Server port exposed through the service: the default of 1433 is changed to 1451 in my helm command.
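
When the number of overrides grows, the same result can be achieved with a custom values file rather than a list of --set flags, and a running release can later be reconfigured with helm upgrade. A sketch (my-values.yaml is a hypothetical file name):

$ cat my-values.yaml
DMK:
  value: "Y"
service:
  port:
    value: 1451

$ helm install --name sqlhelm . -f my-values.yaml
$ helm upgrade sqlhelm . -f my-values.yaml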

The result is as follows:

LAST DEPLOYED: Mon Dec 17 23:23:26 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
sqlhelm-dbi-mssql-linux-app       Bound   pvc-5faffb52-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-backup    Bound   pvc-5fb0c43a-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-data      Bound   pvc-5fb32657-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-tempdb    Bound   pvc-5fb680fe-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-translog  Bound   pvc-5fbb9350-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s

==> v1/Service
NAME                     TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)         AGE
sqlhelm-dbi-mssql-linux  LoadBalancer  10.99.4.205  localhost    1451:32569/TCP  8m57s

==> v1beta2/Deployment
NAME                     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
sqlhelm-dbi-mssql-linux  1        1        1           1          8m57s

==> v1/Pod(related)
NAME                                      READY  STATUS   RESTARTS  AGE
sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr  1/1    Running  0         8m56s

==> v1/Secret
NAME                                 TYPE    DATA  AGE
sqlhelm-dbi-mssql-linux-user-secret  Opaque  1     8m57s
sqlhelm-dbi-mssql-linux-sa-secret    Opaque  1     8m57s


This command provides a picture of the deployed components and their status, including the persistent volume claims, my SQL Server pod, the service that exposes the SQL Server port, and the K8s secrets for the sa and “application” user passwords. This picture is available at any moment by executing the following command:

$ helm status sqlhelm


We may also retrieve the list of existing releases with the following helm command:

$ helm ls sqlhelm
NAME    REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
sqlhelm 1               Mon Dec 17 23:23:26 2018        DEPLOYED        dbi-mssql-linux-1.0.0   1.0             default


It's worth noting that each resource is identified by labels (a very powerful feature of K8s), so we may easily list the components installed by my release by filtering on the corresponding label (app or release) as follows:

$ kubectl get all -l release=sqlhelm
NAME                                           READY     STATUS    RESTARTS   AGE
pod/sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr   1/1       Running   1          16h

NAME                              TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/sqlhelm-dbi-mssql-linux   LoadBalancer   10.99.4.205   localhost     1451:32569/TCP   16h

NAME                                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sqlhelm-dbi-mssql-linux   1         1         1            1           16h

NAME                                                 DESIRED   CURRENT   READY     AGE
replicaset.apps/sqlhelm-dbi-mssql-linux-67c4898dfb   1         1         1         16h


Let's take a look at my SQL Server pod log and focus on the different custom steps applied during the startup of the pod. All the custom steps executed correctly, with the input values coming from the values.yaml file.

kubectl logs sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr

======= 2018-12-17 22:29:44 Configuring tempdb database files placement OK =======
======= 2018-12-17 22:29:44 Configuring max server memory =======
2018-12-17 22:29:45.01 spid51      Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
2018-12-17 22:29:45.03 spid51      Configuration option 'max server memory (MB)' changed from 2147483647 to 3840. Run the RECONFIGURE statement to install
…
======= 2018-12-17 22:29:45 Configuring max server memory OK =======
======= 2018-12-17 22:29:45 Creating login dbi_user =======
======= 2018-12-17 22:29:45 Creating login dbi_user OK =======
======= 2018-12-17 22:29:45 Installing DMK =======
Changed database context to 'master'.
Creating dbi_tools...
======= 2018-12-17 22:30:08 Installing DMK OK =======
======= MSSQL CONFIG COMPLETED =======


Finally, let's connect to my SQL Server pod with the mssql-cli utility and check that everything is OK from a configuration perspective:

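A connection sketch, assuming the service is reachable locally on the port configured earlier (mssql-cli prompts for the sa password):

$ mssql-cli -S localhost,1451 -U sa
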
master> select name as logical_name, physical_name
....... from sys.master_files;
+----------------+-------------------------------------------------+
| logical_name   | physical_name                                   |
|----------------+-------------------------------------------------|
| master         | /u01/sqlserverdata/mssqlserver/master.mdf       |
| mastlog        | /u01/sqlserverdata/mssqlserver/mastlog.ldf      |
| tempdev        | /u03/sqlservertempdb/mssqlserver/tempdb.mdf     |
| templog        | /u03/sqlservertempdb/mssqlserver/templog.ldf    |
| tempdbdev_2    | /u03/sqlservertempdb/mssqlserver/tempdb2.ndf    |
| tempdbdev_3    | /u03/sqlservertempdb/mssqlserver/tempdb3.ndf    |
| tempdbdev_4    | /u03/sqlservertempdb/mssqlserver/tempdb4.ndf    |
| modeldev       | /u01/sqlserverdata/mssqlserver/model.mdf        |
| modellog       | /u01/sqlserverdata/mssqlserver/modellog.ldf     |
| MSDBData       | /u01/sqlserverdata/mssqlserver/MSDBData.mdf     |
| MSDBLog        | /u01/sqlserverdata/mssqlserver/MSDBLog.ldf      |
| dbi_tools      | /u01/sqlserverdata/mssqlserver/dbi_tools.mdf    |
| dbi_tools_log  | /u02/sqlserverlog/mssqlserver/dbi_tools_log.ldf |
+----------------+-------------------------------------------------+


The database file placement meets my flexible architecture requirements. The DMK maintenance tool is also deployed correctly, with the dbi_tools database as shown below:

master> use dbi_tools;
Commands completed successfully.
Time: 0.253s
dbi_tools> select name as table_name
.......... from sys.tables;
+-----------------------------------+
| table_name                        |
|-----------------------------------|
| dbi_maintenance_task_logs         |
| dbi_maintenance_task_details_logs |
| dbi_maintenance_configuration     |
| __RefactorLog                     |
+-----------------------------------+


The sa and dbi_user (the default name in my template) logins are available for sysadmin and regular user connections.

master> select name AS login_name
....... from sys.server_principals
....... where type = 'S' and name not like '##%##';
+--------------+
| login_name   |
|--------------|
| sa           |
| dbi_user     |
+--------------+


The deployment completed successfully! This was a brief overview of Helm's capabilities with SQL Server; other write-ups will come soon!

Happy deployment!


Cet article Deploying SQL Server on K8s with Helm charts est apparu en premier sur Blog dbi services.

[Blog] Troubleshooting: Instance Compute Error: Shape VM.Standard1.1 Not Found

Online Apps DBA - Tue, 2018-12-18 08:22

Instance Compute Error: Shape VM.Standard1.1 Not Found is one of the most common errors that can get in your way while creating a Database Instance in Oracle Cloud. But don't worry! Visit https://k21academy.com/1z016015 if you are hitting this issue and get to know all about it, its cause and the solution to this problem in […]

The post [Blog] Troubleshooting: Instance Compute Error: Shape VM.Standard1.1 Not Found appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

NULL predicate

Jonathan Lewis - Tue, 2018-12-18 07:13

People ask me from time to time if I'm going to write another book on the Cost Based Optimizer – and I think the answer has to be no, because the product keeps growing so fast that it's not possible to keep up, and because there are always more and more little details that might have been around for years and finally show up when someone asks me a question about some little oddity I've never noticed before.

The difficulty with the “little oddities” is the amount of time you could spend trying to work out whether or not they matter and whether it's worth writing about them. Here's a little example to show what I mean – first the data set:


rem
rem     Script:         null_filter.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:
rem
rem     Last tested
rem             18.3.0.0
rem             12.1.0.2
rem

create table t1
nologging
as
select  *
from    all_objects
where   rownum <= 50000 -- > comment to avoid wordpress format issue
;

insert into t1 select * from t1;
insert into t1 select * from t1;
insert into t1 select * from t1;
commit;

create index t1_i1 on t1(object_type, data_object_id, object_id, created);

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );
end;
/

It’s a simple data set with a single index. The only significant thing about the index is that the second column (data_object_id) is frequently null. This leads to a little quirk in the execution plans for a very similar pair of statements:


set serveroutput off
alter session set statistics_level = all;

select
        object_name, owner
from
        t1
where
        object_type = 'TABLE'
and     data_object_id = 20002
and     object_id = 20002
and     created > trunc(sysdate - 90)
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

select
        object_name, owner
from
        t1
where
        object_type = 'TABLE'
and     data_object_id is null
and     object_id = 20002
and     created > trunc(sysdate - 90)
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

How much difference would you expect in the execution plans for these two queries? There is, of course, the side effect of the “is null” predicate disabling the “implicit column group” that is the index distinct_keys value, but in this case I’ve got a range-based predicate on one of the columns so Oracle won’t be using the distinct_keys anyway.

Of course there's the point that you can't use the equality operator with null, you have to use “is null” – and that might make a difference, but how? Here are the two execution plans:


----------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |      0 |00:00:00.01 |       3 |      1 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 |      0 |00:00:00.01 |       3 |      1 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |      1 |      1 |      0 |00:00:00.01 |       3 |      1 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("OBJECT_TYPE"='TABLE' AND "DATA_OBJECT_ID"=20002 AND "OBJECT_ID"=20002 AND
              "CREATED">TRUNC(SYSDATE@!-90))

-------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |      0 |00:00:00.01 |       3 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 |      0 |00:00:00.01 |       3 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |      1 |      1 |      0 |00:00:00.01 |       3 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("OBJECT_TYPE"='TABLE' AND "DATA_OBJECT_ID" IS NULL AND "OBJECT_ID"=20002 AND
              "CREATED">TRUNC(SYSDATE@!-90))
       filter(("OBJECT_ID"=20002 AND "CREATED">TRUNC(SYSDATE@!-90)))

The query with the predicate “data_object_id is null” repeats the object_id and sysdate predicates as access predicates and filter predicates. This seems a little surprising and a potential performance threat. In the first query the run-time engine will hit the correct index leaf block in exactly the right place very efficiently and then walk along it supplying every rowid to the parent operator until it hits the end of the range.

With the “is null” plan the run-time engine will be checking the actual value of object_id and created for every index entry on the way – how much extra CPU will this use and, more importantly, might Oracle start with the first index entry where object_type = 'TABLE' and data_object_id is null and walk through every index entry that has that null, checking for the correct object_id as it goes?

That last question is the reason for running the query with rowsource execution stats enabled. The first query did a single physical read while the second didn't have to, but the more important detail is that both queries did the same number of buffer gets – and there is, by the way, a set of eight rows where the object_id and data_object_id are 20,002, but they were created several years ago, so the index range scan returns no rows in both cases.

Based on that comparison, how do we show that Oracle has not walked all the way from the first index entry where object_type = ‘TABLE’ and data_object_id is null checking every entry on the way or, to put it another way, has Oracle really managed to prune down the index range scan to the minimum “wedge” indicated by the presence of the predicates “OBJECT_ID”=20002 AND “CREATED”>TRUNC(SYSDATE@!-90) as access predicates?

Let’s just count the number of leaf blocks that might be relevant, using the sys_op_lbid() function (last seen here) that Oracle uses internally to count the number of leaf blocks in an index. First we get the index object_id, then we scan it to see how many leaf blocks hold entries that match our object_type and data_object_id predicates but appear in the index before our target value of 20,002:


column object_id new_value m_index_id

select
        object_id
from
        user_objects
where
        object_type = 'INDEX'
and     object_name = 'T1_I1'
;

select  distinct sys_op_lbid(&m_index_id, 'L', rowid)
from    t1
where   object_type    = 'TABLE'
and     data_object_id is null
and     object_id      < 20002
;


SYS_OP_LBID(159271
------------------
AAAm4nAAFAAACGDAAA
AAAm4nAAFAAACF9AAA
AAAm4nAAFAAACGCAAA
AAAm4nAAFAAACF/AAA
AAAm4nAAFAAACF+AAA
AAAm4nAAFAAACGFAAA
AAAm4nAAFAAACGEAAA
AAAm4nAAFAAACGGAAA

8 rows selected.


This tells us that there are 8 leaf blocks in the index that we would have to range through before we found object_id 20,002, and we would have seen 8 buffer gets, not 3, in the rowsource execution stats if Oracle had not actually been clever with its access predicates and narrowed down the wedge of the index it was probing.

Bottom line: for a multi-column index there seems to be a difference in execution plans between “column is null” and “column = constant” when the column is one of the earlier columns in the index – but even though the “is null” option results in some access predicates re-appearing as filter predicates in the index range scan the extra workload is probably not significant – Oracle still uses the minimum number of index leaf blocks in the index range scan.


VITAS Healthcare Selects Oracle Cloud Applications to Optimize Its Workforce Management Systems

Oracle Press Releases - Tue, 2018-12-18 07:00
Press Release
VITAS Healthcare Selects Oracle Cloud Applications to Optimize Its Workforce Management Systems Leading provider of end-of-life care streamlines operations across key business processes, metrics, divisions and communication channels

Redwood City, Calif.—Dec 18, 2018

VITAS Healthcare, a leading provider of end-of-life care in the US, has selected Oracle Cloud Applications to streamline core business and finance functions, anticipate and respond to frequently changing workforce needs, and support career success for nearly 12,000 VITAS employees.

A pioneer and leader in the hospice movement since 1978, VITAS operates 47 hospice locations in 14 states and the District of Columbia. Aided by Oracle Cloud Applications, VITAS is set to realize benefits that include coordinated wage and compensation oversight and controls, improved productivity and cost controls, heightened human resource (HR) effectiveness and metrics, more nimble communication workflow, and authorization to mitigate litigation and other risks.

“This upgrade of our human capital management program will provide VITAS with best practices and protocols that will accelerate and sustain our company’s success for another 40 years,” said Nick Westfall, CEO of VITAS. “Our goal is to enhance reporting and analytics capabilities, improve collaboration, optimize and automate core business and administrative processes, reduce turnover and rely on a powerful platform that will enable all of our employees to access powerful insights on their mobile devices.”

With Oracle HCM Cloud, VITAS will be able to optimize talent management and training, access complete workforce insights, increase operational efficiency, and enable its employees to access the software on any device.

“As a leader in the hospice movement in the United States, VITAS Healthcare continues to evolve and streamline its business and patient-care operations,” said Rick Jewell, senior vice president of applications development, Oracle. “Oracle Cloud Applications will enable VITAS to deliver a higher level of support to their patients, employees, vendors and partners.”

Contact Info
Bill Rundle
Oracle PR
650-506-1891
Bill.Rundle@oracle.com
VITAS Healthcare – 40 Years Strong

Established in 1978, VITAS® Healthcare is a pioneer and leader in the American hospice movement. Headquartered in Miami, Florida, VITAS (pronounced VEE-tahs) operates 47 hospice programs in 14 states (California, Connecticut, Delaware, Florida, Georgia, Illinois, Kansas, Missouri, New Jersey, Ohio, Pennsylvania, Texas, Virginia and Wisconsin) and the District of Columbia. VITAS employs 12,145 professionals who care for terminally ill patients daily, primarily in the patients’ homes, and also in the company’s 27 inpatient hospice units as well as in hospitals, nursing homes and assisted living communities/residential care facilities for the elderly. At the conclusion of the third quarter of 2018, VITAS reported an average daily census of 17,976. Visit www.vitas.com.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates.

Talk to a Press Contact

Bill Rundle

  • 650-506-1891

Pharma Companies Can Cut Clinical Trial Randomization Set Up 89 Percent with Oracle

Oracle Press Releases - Tue, 2018-12-18 06:57
Press Release
Pharma Companies Can Cut Clinical Trial Randomization Set Up 89 Percent with Oracle Oracle Clinical One Randomization and Supplies Management Cloud Service reduces set-up time from two months to a few days, speeding clinical research

Redwood Shores, Calif.—Dec 18, 2018

The complexity of clinical research and development has increased exponentially, putting a strain on outdated systems used to support the business processes required to conduct a clinical trial. To alleviate these complexities, Oracle Health Sciences recently launched Clinical One Randomization and Supplies Management Cloud Service (ORS), which reduces the time required for study set up from nearly two months to just a few days. Since its introduction, more than one thousand patients have already been screened, and 78 percent of them randomized, through the cloud service.

"At Oyster Point Pharma, we are developing a revolutionary new treatment for dry eye disease, and with the Oracle Randomization and Supplies Management Cloud Service, we have been able to significantly reduce our clinical trial setup time. In 2018 alone, as a start-up company, we initiated and completed 3 separate phase 2b trials using the Oracle Health Sciences cloud service," said Jeffrey Nau, president and CEO, Oyster Point Pharma.

A recent survey of executives from pharma and contract research organizations highlighted the top challenges in Randomization and Trial Supplies Management (RTSM) in clinical trials. For more than 70 percent of respondents, slowness to build, test and deploy new trials; integration with other platforms and lack of flexibility; and inability to support study changes were paramount.

"Our mission is to support our pharmaceutical and CRO customers in bringing drugs to market faster, and more efficiently," said Steve Rosenberg, general manager, Oracle Health Sciences. "We continue to drive new innovations to simplify the building, launching, and operation of clinical trials. ORS represents a major step in that regard by greatly reducing the build time. Each day saved during a clinical trial is one day closer to bringing a new therapy to market for patients waiting in need."

A breakthrough in conducting trials, Oracle Clinical One Randomization and Supplies Management enables pharmaceutical companies to gain new levels of:

  • Speed and Flexibility – self-service point and click trial design and set up in several days whether through a CRO or clinical R&D staff
  • Simplicity and Control – ability to make mid-study protocol changes with the click of a mouse
  • Integration and Efficiency – ORS is easily integrated with Oracle’s existing electronic data capture (EDC) solution, InForm, and seamlessly integrates with other, existing EDC solutions on the market
  • Data Unification – because ORS is part of the Clinical One eClinical platform, data can be shared across people, processes and systems throughout the entire lifecycle of a trial
Contact Info
Valerie Beaudett
Oracle
+1 650.400.7833
Valerie.Beaudett@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Valerie Beaudett

  • +1 650.400.7833

Lightning Web Components - The subtleties of Tracking and Wiring

Rob Baillie - Tue, 2018-12-18 06:31

Following on from yesterday's investigations into the behaviour of '@track', and its effect on untracked properties, I figured I should expand my remit to '@wire'.

My main reason for that was the statement in the documentation:

In the wire adapter’s configuration object, prefix a property with $ to reference a property of the component instance. The $ prefix tells the wire service to treat it as a property of the class and evaluate it as this.propertyName. The property is reactive. If the property’s value changes, new data is provisioned and the component rerenders.

This comment relates to code along the lines of:


@api recordId;

@wire(getRecord, { recordId: '$recordId', fields })
contact;

The bit that really piqued my interest was 'The property is reactive'.

In all the examples, it looked like the property being referenced was always tracked or set as an api field (the latter implying the former).

That's the case in the above example - the property passed as a parameter 'recordId' as referenced by '$recordId' is defined as an api field, and is therefore tracked.

There's also that recurring point 'the component rerenders', which we saw in my previous post. Recalling that, it essentially meant that a change to a tracked property caused all untracked properties in the same component (and I am assured, but have yet to prove, child components) to be re-rendered.

So, what is the actual behaviour in this case? Are the implications the same?

You can find code that illustrates the examples below, as well as the points from yesterday's post, in this bitbucket repository if you want to explore the behaviours further. I suggest you take a look.

Tracked property used as a parameter to a wired property

The natural first example is much like the one exampled above. We have an Apex method that takes a parameter. We wire that parameter to a javascript property, and reference a single parameter.

Javascript component

@track searchString;

@wire(searchContacts, { searchString: '$searchString' })
contacts;

When the searchString property is updated:

  • The searchString's change causes the wired function to be re-evaluated.
  • Therefore the value of 'contacts' changes.
  • The component treats contacts as if it was tracked and causes the component to be re-rendered.

OK. That's pretty decent, and what the documentation says will happen. No surprises there!

But then I got to thinking: So, what happens if the property isn't tracked?

My first guess was that maybe the Apex method wouldn't get re-executed, but I wasn't entirely convinced - after all, the documentation only really makes a big deal about rendering. So I changed the code and tried again...

Javascript component

searchString;

@wire(searchContacts, { searchString: '$searchString' })
contacts;

This time, when the searchString property is updated:

  • The searchString's change causes the wired function to be re-evaluated.
  • Therefore the value of 'contacts' changes.
  • The component treats contacts as if it was tracked and causes the component to be re-rendered.

Erm. OK. It's the same. Hmmm....

Looking at what's actually going on, that does make sense. It's the '$searchString' reference that tells the framework that searchString is reactive with respect to the wiring, so it's that which causes the Apex to re-execute. And once the method is re-evaluated, the value of 'contacts' changes, and that causes the component to re-render.

That got me to thinking - so what does the '@track' do against the searchString in the original example? Experience tells me that these things generally have an effect.

So I added something else to my example...


renderedCallback() {
    console.log( 'renderedCallback was called' );
}

This hooks into the component's lifecycle, as described here, and will tell us when the component gets re-rendered.

It turns out that my initial understanding of the first example was slightly wrong, though not in a way that would generally have much of an impact.

That is, if the parameter is tracked, you end up with the following:

  • The searchString's change causes:
    • The component to be re-rendered (since it is tracked).
    • The wired function to be re-evaluated (since it is referenced as a '$' parameter).
  • The execution of the wired function causes the value of 'contacts' to change.
  • The component treats contacts as if it was tracked and causes the component to be re-rendered.

The result is that the component is re-rendered twice!

And sure enough, if you take the tracking off the searchString parameter, the component is only re-rendered once.

So, this does reinforce another point that the documentation makes here:

Don’t overuse @track. Track a property only if you need the component to rerender when the property’s value changes.

Personally, I think I'd call out this particular behaviour and remind people - you don't need to 'track' a property in order for the wired method to re-execute - but maybe it's just my lack of attention that missed that little point.

And I'd also state that you should only track a property that is being used as a reactive wired parameter if you need the property to be re-rendered before the Apex you are calling returns with its result.

Node-oracledb Connection Samples: Proxies and External Authentication

Christopher Jones - Mon, 2018-12-17 22:19

We recently reviewed 'external authentication' and 'proxy connections' support in node-oracledb and did a few tweaks that will appear in the future node-oracledb 3.1.

You can use 'external authentication' to connect without requiring a password being stored in your Node.js applications. This is useful, for example, to authenticate via LDAP or use an Oracle Wallet.

The idea of a proxy connection is to create a schema in one database user name. Privilege is granted on that schema to other database users so they can access the schema and manipulate its data. This aids three-tier applications where one user owns the schema while multiple end-users access the data.

You can use external authentication and proxy connections together or separately.

In the 'so I can find it again' category, here are some behaviors of node-oracledb connections.

For ease of testing, my external authentication was via Operating System authentication using an 'OPS$' schema. Since my database was remote to the node-oracledb application I used the INSECURE setting 'alter system set remote_os_authent=true scope=spfile;'. Do not use this in real life!

SQL:

create user mynormaluser identified by mynormaluserpw;
grant create session to mynormaluser;

create user myproxyuser identified by myproxyuserpw;
grant create session to myproxyuser;

create user mysessionuser1 identified by doesnotmatter;
grant create session to mysessionuser1;
alter user mysessionuser1 grant connect through myproxyuser;

create user mysessionuser2 identified by doesnotmatter;
grant create session to mysessionuser2;

-- I logged into my computer as the 'oracle' OS user:
create user ops$oracle identified externally;
grant connect, resource to ops$oracle;
alter user ops$oracle default tablespace users;
alter user mysessionuser2 grant connect through ops$oracle;

JavaScript:

const oracledb = require('oracledb');

async function ShowUserInfo(conn) {
  let result = await conn.execute(
    `select sys_context('USERENV', 'PROXY_USER'),
            sys_context('USERENV', 'SESSION_USER')
     from dual`);
  console.log("  Proxy User:", result.rows[0][0]);
  console.log("  Session User:", result.rows[0][1]);
  console.log();
};

(async function() {

  let conn, pool, config, testdesc;

  // -------------------- STANDALONE CONNECTIONS --------------------

  console.log("(1) Standalone: Basic Auth");
  // Gives:
  //   Proxy User: null
  //   Session User: MYNORMALUSER
  try {
    config = { connectString: "localhost/orclpdb", user: "mynormaluser", password: "mynormaluserpw" };
    conn = await oracledb.getConnection(config);
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(2) Standalone: External Auth");
  // Gives:
  //   Proxy User: null
  //   Session User: OPS$ORACLE
  try {
    config = { connectString: "localhost/orclpdb", externalAuth: true };
    conn = await oracledb.getConnection(config);
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(3) Standalone: Basic Auth with proxy");
  // Gives:
  //   Proxy User: MYPROXYUSER
  //   Session User: MYSESSIONUSER1
  try {
    config = { connectString: "localhost/orclpdb", user: "myproxyuser[mysessionuser1]", password: "myproxyuserpw" };
    conn = await oracledb.getConnection(config);
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(4) Standalone: External Auth with proxy in brackets");
  // With node-oracledb 3.0 gives:
  //   DPI-1032: user name and password cannot be set when using external authentication
  // With node-oracledb 3.1 gives:
  //   Proxy User: OPS$ORACLE
  //   Session User: MYSESSIONUSER2
  try {
    config = { connectString: "localhost/orclpdb", user: "[mysessionuser2]", externalAuth: true };
    conn = await oracledb.getConnection(config);
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(5) Standalone: External Auth with proxy");
  // With node-oracledb 3.0 gives:
  //   DPI-1032: user name and password cannot be set when using external authentication
  // With node-oracledb 3.1 gives:
  //   DPI-1069: proxy user name must be enclosed in [] when using external authentication
  try {
    config = { connectString: "localhost/orclpdb", user: "mysessionuser2", externalAuth: true };
    conn = await oracledb.getConnection(config);
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  // -------------------- POOLED CONNECTIONS --------------------

  console.log("(6) Pooled: Basic Auth");
  // Gives:
  //   Proxy User: null
  //   Session User: MYNORMALUSER
  try {
    config = { connectString: "localhost/orclpdb", user: "mynormaluser", password: "mynormaluserpw" };
    pool = await oracledb.createPool(config);
    conn = await pool.getConnection();
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(7) Pooled: External Auth");
  // Gives:
  //   Proxy User: null
  //   Session User: OPS$ORACLE
  try {
    config = { connectString: "localhost/orclpdb", externalAuth: true };
    pool = await oracledb.createPool(config);
    conn = await pool.getConnection();
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(8) Pooled: Basic Auth with proxy in pool creation");
  // Gives:
  //   Proxy User: MYPROXYUSER
  //   Session User: MYSESSIONUSER1
  try {
    config = { connectString: "localhost/orclpdb", user: "myproxyuser[mysessionuser1]", password: "myproxyuserpw" };
    pool = await oracledb.createPool(config);
    conn = await pool.getConnection();
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(9) Pooled: Basic Auth with proxy in connection");
  // Gives:
  //   Proxy User: MYPROXYUSER
  //   Session User: MYSESSIONUSER1
  try {
    config = { connectString: "localhost/orclpdb", user: "myproxyuser", password: "myproxyuserpw", homogeneous: false };
    pool = await oracledb.createPool(config);
    conn = await pool.getConnection({ "user": "mysessionuser1" });
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(10) Pooled: Basic Auth with proxy in brackets in connection");
  // Gives:
  //   ORA-00987 missing or invalid username(s)
  try {
    config = { connectString: "localhost/orclpdb", user: "myproxyuser", password: "myproxyuserpw", homogeneous: false };
    pool = await oracledb.createPool(config);
    conn = await pool.getConnection({ "user": "[mysessionuser1]" });
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(11) Pooled: External Auth with proxy in brackets in pool creation");
  // Gives:
  //   DPI-1032 user name and password cannot be set when using external authentication
  try {
    config = { connectString: "localhost/orclpdb", user: "[mysessionuser2]", externalAuth: true };
    pool = await oracledb.createPool(config);
    conn = await pool.getConnection();
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(12) Pooled: External Auth with proxy in pool creation");
  // Gives:
  //   DPI-1032: user name and password cannot be set when using external authentication
  try {
    config = { connectString: "localhost/orclpdb", user: "mysessionuser2", externalAuth: true };
    pool = await oracledb.createPool(config);
    conn = await pool.getConnection();
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(13) Pooled: External Auth with proxy in brackets in connection");
  // Using Oracle 12.2 client libraries gives:
  //   Proxy User: null
  //   Session User: OPS$ORACLE
  // Using Oracle 18.3 client libraries gives:
  //   Proxy User: OPS$ORACLE
  //   Session User: MYSESSIONUSER2
  // This was an enhancement in Oracle 18.3
  try {
    config = { connectString: "localhost/orclpdb", externalAuth: true };
    pool = await oracledb.createPool(config);
    conn = await pool.getConnection({ user: "[mysessionuser2]" });
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

  console.log("(14) Pooled: External Auth with proxy in connection");
  // With node-oracledb 3.0 gives:
  //   Proxy User: null
  //   Session User: OPS$ORACLE
  // With node-oracledb 3.1 gives:
  //   DPI-1069: proxy user name must be enclosed in [] when using external authentication
  // This change in node-oracledb 3.1 prevents connecting with an unexpected session user
  try {
    config = { connectString: "localhost/orclpdb", externalAuth: true };
    pool = await oracledb.createPool(config);
    conn = await pool.getConnection({ user: "mysessionuser2" });
    await ShowUserInfo(conn);
  } catch (e) {
    console.log(e.message + "\n");
  }

})();

Summary:

  • (4) "Standalone: External Auth with proxy": Fixed in node-oracledb 3.1

  • (5) "Standalone: External Auth with proxy no brackets": Error message was improved in node-oracledb 3.1

  • (13) "Pooled: External Auth with proxy in brackets in connection": Works with Oracle Client 18.3+

  • (14) "Pooled: External Auth with proxy in connection": In node-oracledb 3.1 connection fails due to improved validation. In node-oracledb 3.0 connection succeeded despite the options being inconsistent.

We've had some discussions about the use of "[]" brackets - which is a standard Oracle syntax passed through to the Oracle client libraries. We could have done some manipulation in node-oracledb for consistency, but we decided not to make node-oracledb behave differently than other Oracle language interfaces.

Business case problem

Tom Kyte - Mon, 2018-12-17 22:06
I have a problem: I have a table emp_table(name, salary, department_name). I want to move data from this table to two tables, employees(name, salary, department_id) and department(department_name). And I put a trigger on departments to make ...
Categories: DBA Blogs

Cannot append more than 32kb data to a clob variable

Tom Kyte - Mon, 2018-12-17 22:06
Hello, I am trying to write a pl/sql procedure(oracle) in which I am storing the final output in a clob variable, but it gives me the error when I test the procedure by providing the input: [Error] Execution (1: 1): ORA-06502: PL/SQL: numeric or ...
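
For reference, the 32k limit usually bites when the appends go through VARCHAR2 semantics; a minimal sketch of the DBMS_LOB route that avoids it:

declare
  l_clob  clob;
  l_chunk varchar2(32767) := rpad('x', 32767, 'x');
begin
  dbms_lob.createtemporary(l_clob, true);
  for i in 1 .. 10 loop
    dbms_lob.writeappend(l_clob, length(l_chunk), l_chunk);  -- stays a CLOB, no 32k cap
  end loop;
  dbms_output.put_line('length = ' || dbms_lob.getlength(l_clob));
  dbms_lob.freetemporary(l_clob);
end;
/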
Categories: DBA Blogs

Perf : Insert into Select Vs For All

Tom Kyte - Mon, 2018-12-17 22:06
I need to perform DB operations where I need to insert 0.5 million records from another complex query involving joins, exists etc. As of now I am using INSERT INTO SELECT inside a PL/SQL package, but sometimes I am observing for the same data set t...
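
For reference, the two patterns being compared look roughly like this (table and column names are hypothetical):

-- set-based: one SQL statement, usually the fastest option
insert into target_t (id, val)
select t1.id, t2.val
from   t1 join t2 on t2.fk = t1.pk;

-- PL/SQL alternative: BULK COLLECT with FORALL in batches
declare
  cursor c is
    select t1.id, t2.val
    from   t1 join t2 on t2.fk = t1.pk;
  type t_rows is table of c%rowtype;
  l_rows t_rows;
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 1000;
    forall i in 1 .. l_rows.count
      insert into target_t values l_rows(i);
    exit when l_rows.count < 1000;
  end loop;
  close c;
  commit;
end;
/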
Categories: DBA Blogs

INSERT IF NOT EXISTS

Tom Kyte - Mon, 2018-12-17 22:06
I have these tables: Create table TB_COBA1 (ID NUMBER, NIS NUMBER, NILAI_A NUMBER, SEMESTER NUMBER); Create table TB_COBA2 (ID NUMBER, NIS NUMBER, NILAI_B NUMBER, SEMESTER NUMBER); SQL> Insert into TB_COBA1 values (1,1,1,1); SQL> Inse...
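
Against the tables quoted above, the standard "insert if not exists" pattern is MERGE; a sketch (the matching key is assumed here to be id, nis and semester):

merge into tb_coba2 t
using tb_coba1 s
on (t.id = s.id and t.nis = s.nis and t.semester = s.semester)
when not matched then
  insert (id, nis, nilai_b, semester)
  values (s.id, s.nis, s.nilai_a, s.semester);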
Categories: DBA Blogs

Error with cursor syntax insert where not exists

Tom Kyte - Mon, 2018-12-17 22:06
I have a new table called jobs_new(job_id, job_title, min_salary, max_salary) and it has only one row, with the job_id 'ST_MAN'. I want to make an anonymous block to copy the other rows from the jobs table except when the job_id is 'ST_MAN', because it...
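
For reference, the copy described above needs no cursor at all; a single-statement sketch using the column list from the question:

insert into jobs_new (job_id, job_title, min_salary, max_salary)
select j.job_id, j.job_title, j.min_salary, j.max_salary
from   jobs j
where  j.job_id <> 'ST_MAN'
and    not exists (select null from jobs_new n where n.job_id = j.job_id);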
Categories: DBA Blogs
