Yann Neuhaus

dbi services technical blog

Deploying SQL Server on K8s with Helm charts

Tue, 2018-12-18 10:06

During the last DockerCon EU in Barcelona, I heard a lot about Helm in K8s architectures, and attending this conference was also a good opportunity to write about it.


In a nutshell, Helm is a package manager for K8s, and you may think of it like the ones available on the Linux side such as apt, yum or zypper, to cite a few of them. Helm charts are a key component of the Helm architecture; they make deployments easy, standardized and reusable, and this is definitely what I was looking for in our current CI/CD pipeline implementation of our DMK maintenance tool.

Helm matters for enterprise-scale deployments because it addresses common challenges with the following (non-exhaustive) list of capabilities:

  • Helm charts can be shared easily across the enterprise or with contributors all over the world through a GitHub repository.
  • Using Helm charts allows you to quickly get a specific environment for testing.
  • Existing charts can be customized for specific deployments depending on the context.
  • The easy deployment and deletion of applications makes Helm adoption easier.
  • Production-ready packages are possible; they eliminate deployment errors due to incorrect configuration files and reduce the complexity of maintaining an application catalog.

In my case, I have had in mind for a while to use Helm charts to simplify my first SQL Server container deployments on K8s, which relied on a complex YAML file including a lot of objects like services, pods, secrets and persistent volumes. One additional motivation was the capability to change some preconfigured settings in the deployment in-flight when I wanted to switch from my minikube environment to my AKS cluster on Azure.

In this first write-up I used a custom dbi services image for SQL Server (a production-ready Docker image) and I decided to use this image as the basis of my custom Helm chart. First of all, let's say I didn't start from scratch: I used the mssql-linux stable chart available on GitHub, but obviously I customized it for my own requirements:

  • The custom dbi services image for SQL Server includes the creation of our flexible architecture, and I had to update the persistent volume and claim configuration to match this new storage layout.
  • The custom image supports the (optional) deployment of our DMK maintenance tool, which includes different SQL objects to perform maintenance of customer databases (basically update statistics, index rebuild and backup tasks). So, I needed to add a parameter to enable or disable the deployment of this tool inside the pod.
  • T-SQL scripts are also executed during the container startup: they apply different server-level configuration settings, configure the tempdb database file placement and add some trace flags to meet our best practices. But there is no real impact on the Helm chart here.
  • An “application” user may be created (optionally) and will be part of the dbcreator server role, according to the least-privilege principle. In most cases we consider that an application doesn't need sysadmin privileges, even on a SQL Server pod and more generally speaking on microservice architectures. So, as already done for the DMK parameter described previously, I had to add another parameter for creating this user when the pod is spun up.

Let's begin with my Helm chart folder hierarchy, which includes the important files Chart.yaml, values.yaml and deployment.yaml.

[dab@DBI-LT-DAB:#]> tree /f
…
T:.
│   .helmignore
│   Chart.yaml
│   values.yaml
│
├───charts
└───templates
        deployment.yaml
        NOTES.txt
        pvc-app.yaml
        pvc-backup.yaml
        pvc-data.yaml
        pvc-tempdb.yaml
        pvc-tranlog.yaml
        secret.yaml
        service.yaml
        _helpers.tpl

 

Let’s focus on the deployment.yaml file and the customized part within the spec.containers.env section related to my docker image specifications:

  • MSSQL_USER, MSSQL_USER_PASSWORD are environment variables related to my “application” user
  • DMK environment variable enables deployment of the DMK maintenance tool

In addition, the environment variables related to the database file placement have been customized for master, tempdb and user databases according to my flexible architecture specifications with:

  • /u00 (for application files)
  • /u01 (for user data and system database files)
  • /u02 (for transaction log files)
  • /u03 (for tempdb database files)
  • /u98 (for backup files).

The mountPath and persistent volume claim sections have also been updated accordingly, as shown below:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "mssql.fullname" . }}
  labels:
    app: {{ template "mssql.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- if .Values.deployment.annotations }}
  annotations:
{{ toYaml .Values.deployment.annotations | indent 4 }}
{{- end }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "mssql.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "mssql.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: ACCEPT_EULA
              value: "{{ .Values.acceptEula.value | upper }}"
            - name: MSSQL_PID
              value: "{{ .Values.edition.value }}"
            - name: MSSQL_SA_PASSWORD
              valueFrom:
               secretKeyRef:
                 name: {{ template "mssql.fullname" . }}-sa-secret
                 key: sapassword
            - name: MSSQL_USER
              value: "{{ .Values.usersql.value }}"
            - name: MSSQL_USER_PASSWORD
              valueFrom:
               secretKeyRef:
                 name: {{ template "mssql.fullname" . }}-user-secret
                 key: userpassword
            - name: DMK
              value: "{{ .Values.DMK.value }}"
            - name: MSSQL_MASTER_DATA_FILE
              value: /u01/sqlserverdata/mssqlserver/master.mdf
            - name: MSSQL_MASTER_LOG_FILE
              value: /u01/sqlserverdata/mssqlserver/mastlog.ldf
            - name: MSSQL_DATA_DIR
              value: /u01/sqlserverdata/mssqlserver
            - name: MSSQL_LOG_DIR
              value: /u02/sqlserverlog/mssqlserver
            - name: MSSQL_TEMPDBDATA_DIR
              value: /u03/sqlservertempdb/mssqlserver
            - name: MSSQL_TEMPDBLOG_DIR
              value: /u03/sqlservertempdb/mssqlserver
            - name: MSSQL_BACKUP_DIR
              value: /u98/sqlserver/backup/mssqlserver
            - name: MSSQL_ERROR_LOG
              value: /u00/app/sqlserver/admin/mssqlserver/log
            - name: MSSQL_DUMP_DIR
              value: /u00/app/sqlserver/admin/mssqlserver/dump
            - name: MSSQL_TCP_PORT
              value: "{{ .Values.service.port.value }}"
            - name: MSSQL_LCID
              value: "{{ .Values.lcid.value }}"
            - name: MSSQL_COLLATION
              value: "{{ .Values.collation.value }}"
            - name: MSSQL_ENABLE_HADR
              value: "{{ .Values.hadr.value }}"
            {{ if .Values.resources.limits.memory }}
            - name: MSSQL_MEMORY_LIMIT_MB
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            {{ end }}
          ports:
            - name: mssql
              containerPort: {{ .Values.service.port.value }}
          volumeMounts:
            - name: data
              mountPath: /u01
            - name: transactionlog
              mountPath: /u02
            - name: tempdb
              mountPath: /u03
            - name: backup
              mountPath: /u98 
            - name: app
              mountPath: /u00
          livenessProbe:
            tcpSocket:
               port: mssql
            initialDelaySeconds: {{ .Values.livenessprobe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessprobe.periodSeconds }}
          readinessProbe:
            tcpSocket:
               port: mssql
            initialDelaySeconds: {{ .Values.readinessprobe.initialDelaySeconds }}
            periodSeconds: {{ .Values.readinessprobe.periodSeconds }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}
      volumes:
      - name: master
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingMasterClaim }}
          claimName: {{ .Values.persistence.existingMasterClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-master
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: data
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingDataClaim }}
          claimName: {{ .Values.persistence.existingDataClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-data
          {{- end -}}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: transactionlog
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingTransactionLogClaim }}
          claimName: {{ .Values.persistence.existingTransactionLogClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-translog
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: tempdb
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingTempdbClaim }}
          claimName: {{ .Values.persistence.existingTempdbClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-tempdb
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: backup
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingBackupClaim }}
          claimName: {{ .Values.persistence.existingBackupClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-backup
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: app
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingAppClaim }}
          claimName: {{ .Values.persistence.existingAppClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-app
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}

 

Referring to my flexible architecture, I added two YAML files that contain the new persistent volume claim definitions: pvc-app.yaml for /u00 (app) and pvc-tempdb.yaml for /u03 (tempdb).

Here is the content of my persistent volume claim for tempdb, for instance:

{{- if and .Values.persistence.enabled (not .Values.persistence.existingTempdbClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "mssql.fullname" . }}-tempdb
  labels:
    app: {{ template "mssql.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
{{- if .Values.persistence.annotations }}
  annotations:
{{ toYaml .Values.persistence.annotations | indent 4 }}
{{- end }}
spec:
  accessModes:
    - {{ .Values.persistence.tempdbAccessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.tempdbSize | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

 

I also updated secret.yaml to include a section dedicated to my “application” user password:

---
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "mssql.fullname" . }}-user-secret
  labels:
    app: {{ template "mssql.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
type: Opaque
data:
  {{ if .Values.userpassword }}
  userpassword:  {{ .Values.userpassword.value | b64enc | quote }}
  {{ else }}
  userpassword: {{ randAlphaNum 20 | b64enc | quote }}
{{ end }}

 

Note that the Helm chart allows you to define your own password; if it is left empty, a random password will be generated instead.
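If you let the chart generate the password, it can be retrieved afterwards from the corresponding K8s secret; a quick sketch, assuming the release name sqlhelm and the secret name used later in this post:

$ kubectl get secret sqlhelm-dbi-mssql-linux-user-secret -o jsonpath='{.data.userpassword}' | base64 --decode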

Finally, the values.yaml file contains the predefined values for my release deployment:

# General parameters
acceptEula: 
  value: "Y"
edition: 
  value: "Developer"
DMK: 
  value: "N"
collation: 
  value: SQL_Latin1_General_CP1_CI_AS
lcid: 
  value: 1033
hadr: 
    value: 0
# User parameters
sapassword: 
  value: Password1
usersql: 
  value: dbi_user
userpassword: 
  value: Password2
# Image parameters
image:
  repository: dbi/mssql-server-linux
  tag: 2017-CU12
  pullPolicy: IfNotPresent
# Service parameters
service:
  type: 
    value: LoadBalancer
  port: 
    value: 1433
  annotations: {}
deployment:
  annotations: {}
# Volumes & persistence parameters
persistence:
  enabled: true
  # existingDataClaim:
  # existingTransactionLogClaim:
  # existingBackupClaim:
  # existingMasterClaim:
  # existingAppClaim:
  # existingTempdbClaim:
  storageClass: ""
  dataAccessMode: ReadWriteOnce
  dataSize: 5Gi
  transactionLogAccessMode: ReadWriteOnce
  transactionLogSize: 5Gi
  tempdbAccessMode: ReadWriteOnce
  tempdbSize: 5Gi
  backupAccessMode: ReadWriteOnce
  backupSize: 5Gi
  masterAccessMode: ReadWriteOnce
  masterSize: 5Gi
  appAccessMode: ReadWriteOnce
  appSize: 5Gi
# Probe parameters
livenessprobe:
  initialDelaySeconds: 20
  periodSeconds: 15
readinessprobe:
  initialDelaySeconds: 20
  periodSeconds: 15
# Resource parameters
resources:
  limits:
  #  cpu: 100m
    memory: 5Gi
  # requests:
  #  cpu: 100m
  #  memory: 2Gi
nodeSelector: {}
  # kubernetes.io/hostname: minikube

Let's install my release with the helm command below:

$ helm install --name sqlhelm . --set DMK.value=Y --set service.port.value=1451

 

Pretty simple, right? Note also that I may very easily change predefined parameter values according to my context. For instance, the DMK maintenance tool is not installed by default when the container is spun up, and I changed that by explicitly setting DMK.value to Y. The same applies to the SQL Server port exposed through the service: the default 1433 is changed to 1451 in my helm command.
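Changing a predefined value on an existing release is just as easy with helm upgrade; a minimal sketch that only bumps the memory limit while reusing the other values (the new limit is an arbitrary example):

$ helm upgrade --reuse-values sqlhelm . --set resources.limits.memory=6Gi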

The result is as follows:

LAST DEPLOYED: Mon Dec 17 23:23:26 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
sqlhelm-dbi-mssql-linux-app       Bound   pvc-5faffb52-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-backup    Bound   pvc-5fb0c43a-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-data      Bound   pvc-5fb32657-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-tempdb    Bound   pvc-5fb680fe-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-translog  Bound   pvc-5fbb9350-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s

==> v1/Service
NAME                     TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)         AGE
sqlhelm-dbi-mssql-linux  LoadBalancer  10.99.4.205  localhost    1451:32569/TCP  8m57s

==> v1beta2/Deployment
NAME                     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
sqlhelm-dbi-mssql-linux  1        1        1           1          8m57s

==> v1/Pod(related)
NAME                                      READY  STATUS   RESTARTS  AGE
sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr  1/1    Running  0         8m56s

==> v1/Secret
NAME                                 TYPE    DATA  AGE
sqlhelm-dbi-mssql-linux-user-secret  Opaque  1     8m57s
sqlhelm-dbi-mssql-linux-sa-secret    Opaque  1     8m57s

 

This command provides a picture of the deployed components and their statuses, including the persistent volume claims, my SQL Server pod, the service that exposes the SQL Server port and the K8s secrets for the sa and “application” user passwords. This picture is available at any moment by executing the following command:

$ helm status sqlhelm

 

We may also retrieve a list of existing releases from the following helm command:

$ helm ls sqlhelm
NAME    REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
sqlhelm 1               Mon Dec 17 23:23:26 2018        DEPLOYED        dbi-mssql-linux-1.0.0   1.0             default

 

It's worth noting that each resource is identified by labels (a very powerful feature of K8s) and we may easily get the components installed for my release by filtering on the corresponding label (app or release) as follows:

$ kubectl get all -l release=sqlhelm
NAME                                           READY     STATUS    RESTARTS   AGE
pod/sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr   1/1       Running   1          16h

NAME                              TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/sqlhelm-dbi-mssql-linux   LoadBalancer   10.99.4.205   localhost     1451:32569/TCP   16h

NAME                                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sqlhelm-dbi-mssql-linux   1         1         1            1           16h

NAME                                                 DESIRED   CURRENT   READY     AGE
replicaset.apps/sqlhelm-dbi-mssql-linux-67c4898dfb   1         1         1         16h

 

Let's take a look at my SQL Server pod log and focus on the different custom steps applied during the startup of the corresponding pod. All the custom steps are executed correctly with the input values from the values.yaml file.

kubectl logs sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr

======= 2018-12-17 22:29:44 Configuring tempdb database files placement OK =======
======= 2018-12-17 22:29:44 Configuring max server memory =======
2018-12-17 22:29:45.01 spid51      Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
2018-12-17 22:29:45.03 spid51      Configuration option 'max server memory (MB)' changed from 2147483647 to 3840. Run the RECONFIGURE statement to install
…
======= 2018-12-17 22:29:45 Configuring max server memory OK =======
======= 2018-12-17 22:29:45 Creating login dbi_user =======
======= 2018-12-17 22:29:45 Creating login dbi_user OK =======
======= 2018-12-17 22:29:45 Installing DMK =======
Changed database context to 'master'.
Creating dbi_tools...
======= 2018-12-17 22:30:08 Installing DMK OK =======
======= MSSQL CONFIG COMPLETED =======

 

Finally, let's connect with the mssql-cli utility to my SQL Server pod and check that everything is OK from a configuration perspective:
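To reach the instance through the LoadBalancer service created earlier (external IP localhost and port 1451 in my case), a connection could look like this, with mssql-cli prompting for the sa password:

$ mssql-cli -S localhost,1451 -U sa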

master> select name as logical_name, physical_name
....... from sys.master_files;
+----------------+-------------------------------------------------+
| logical_name   | physical_name                                   |
|----------------+-------------------------------------------------|
| master         | /u01/sqlserverdata/mssqlserver/master.mdf       |
| mastlog        | /u01/sqlserverdata/mssqlserver/mastlog.ldf      |
| tempdev        | /u03/sqlservertempdb/mssqlserver/tempdb.mdf     |
| templog        | /u03/sqlservertempdb/mssqlserver/templog.ldf    |
| tempdbdev_2    | /u03/sqlservertempdb/mssqlserver/tempdb2.ndf    |
| tempdbdev_3    | /u03/sqlservertempdb/mssqlserver/tempdb3.ndf    |
| tempdbdev_4    | /u03/sqlservertempdb/mssqlserver/tempdb4.ndf    |
| modeldev       | /u01/sqlserverdata/mssqlserver/model.mdf        |
| modellog       | /u01/sqlserverdata/mssqlserver/modellog.ldf     |
| MSDBData       | /u01/sqlserverdata/mssqlserver/MSDBData.mdf     |
| MSDBLog        | /u01/sqlserverdata/mssqlserver/MSDBLog.ldf      |
| dbi_tools      | /u01/sqlserverdata/mssqlserver/dbi_tools.mdf    |
| dbi_tools_log  | /u02/sqlserverlog/mssqlserver/dbi_tools_log.ldf |
+----------------+-------------------------------------------------+

 

The database file placement meets my flexible architecture requirements. The DMK maintenance tool is also deployed correctly with the dbi_tools database, as shown below:

master> use dbi_tools;
Commands completed successfully.
Time: 0.253s
dbi_tools> select name as table_name
.......... from sys.tables;
+-----------------------------------+
| table_name                        |
|-----------------------------------|
| dbi_maintenance_task_logs         |
| dbi_maintenance_task_details_logs |
| dbi_maintenance_configuration     |
| __RefactorLog                     |
+-----------------------------------+

 

The sa and dbi_user (the default name in my template) logins are available for sysadmin and regular user connections.

master> select name AS login_name
....... from sys.server_principals
....... where type = 'S' and name not like '##%##';
+--------------+
| login_name   |
|--------------|
| sa           |
| dbi_user     |
+--------------+

 

The deployment completed successfully! This was a brief overview of Helm capabilities with SQL Server; other write-ups will come soon!

Happy deployment!

 

The article Deploying SQL Server on K8s with Helm charts appeared first on Blog dbi services.

Red Hat Enterprise Linux 8 – Application Streams

Thu, 2018-12-13 09:05

You may have heard it: Red Hat Enterprise Linux 8 has been available for download in beta version for a few weeks…
Want to download it? Click here.
A significant change coming with this new version is the way the application packages are provided. As you know, up to RHEL 7 packages were downloaded via repositories listed in .repo files located by default under /etc/yum.repos.d/. This is still the same with RHEL 8, but two new major repositories are available in the default redhat.repo file.

In order to get access to them, we must of course register the system with a Red Hat subscription…
[root@rhel8beta1 ~]# subscription-manager register --username xxx.yyy@zzz.com
Registering to: subscription.rhsm.redhat.com:443/subscription
Password:
The system has been registered with ID: e42829a5-8a8e-42d3-a69a-07a1499e9b0e
The registered system name is: rhel8beta1
[root@rhel8beta1 ~]#

…and attach it to a pool (here the pool will be chosen automatically):
[root@rhel8beta1 ~]# subscription-manager attach --auto
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux for x86_64 Beta
Status: Subscribed
[root@rhel8beta1 ~]#

As the /etc/yum.repos.d/redhat.repo file is now available, let's check which repositories it contains:
[root@rhel8beta1 ~]# grep -B1 name /etc/yum.repos.d/redhat.repo
[rhel-8-for-x86_64-rt-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (Debug RPMs)
--
[rhel-8-for-x86_64-rt-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (Source RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-source-rpms] name = Fast Datapath Beta for RHEL 8 x86_64 (Source RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-debug-rpms] name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (Debug RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-source-rpms] name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (Source RPMs)
--
[rhel-8-for-x86_64-appstream-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (Source RPMs)
--
[rhel-8-for-x86_64-nfv-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (Source RPMs)
--
[rhel-8-for-x86_64-nfv-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (Source RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-rpms] name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (Source RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (Source RPMs)
--
[rhel-8-for-x86_64-appstream-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
--
[rhel-8-for-x86_64-rt-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (RPMs)
--
[rhel-8-for-x86_64-appstream-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-debug-rpms] name = Fast Datapath Beta for RHEL 8 x86_64 (Debug RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs)
--
[rhel-8-for-x86_64-nfv-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-rpms] name = Fast Datapath Beta for RHEL 8 x86_64 (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (Source RPMs)
[root@rhel8beta1 ~]#

AppStream? BaseOS? What are those? This is what we will discover in this blog.

First of all, we can see that both are enabled by default:
[root@rhel8beta1 ~]# subscription-manager repos --list-enabled
+----------------------------------------------------------+
Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID: rhel-8-for-x86_64-baseos-beta-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs)
Repo URL: https://cdn.redhat.com/content/beta/rhel8/8/x86_64/baseos/os
Enabled: 1


Repo ID: rhel-8-for-x86_64-appstream-beta-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Repo URL: https://cdn.redhat.com/content/beta/rhel8/8/x86_64/appstream/os
Enabled: 1
[root@rhel8beta1 ~]#

BaseOS

“Content in BaseOS is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations.”
This is how Red Hat defines it. Over a thousand packages are available from the BaseOS repository:
[root@rhel8beta1 ~]# yum --disablerepo "*" --enablerepo "rhel-8-for-x86_64-baseos-beta-rpms" list available | wc -l
1145
[root@rhel8beta1 ~]#

You can get the full list here.
Basically, those packages are system-related and are mainly used to manage and configure the OS and services (such as NetworkManager, Chrony, Dracut, and so on). In other words, most of them are intended for use by system administrators. So nothing very new here, except for the fact that they are all grouped in a single dedicated repository.
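To double-check that a given package really comes from BaseOS, the info sub-command shows the repository it belongs to; a quick sketch with chrony, one of the packages cited above:
[root@rhel8beta1 ~]# yum --disablerepo "*" --enablerepo "rhel-8-for-x86_64-baseos-beta-rpms" info chrony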

AppStream

The second repository contains many more packages (full list here):
[root@rhel8beta1 /]# yum --disablerepo "*" --enablerepo "rhel-8-for-x86_64-appstream-beta-rpms" list available | wc -l
4318
[root@rhel8beta1 /]#

Application Stream provides additional user space applications, runtime languages and databases. It replaces the “Extras” repositories and the Software Collections. All the content in AppStream is available in two formats: the well-known RPM format and a brand new one called “module”, which is an extension of the RPM format.
A module is a set of RPM packages that are linked together. For example, if you want to check which packages belong to the postgresql module, you must use the new “yum module” command:
[root@rhel8beta1 /]# yum module list postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:47:42 ago on Mon Nov 26 15:13:03 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
postgresql 10 [d] client, default [d] postgresql module
postgresql 9.6 client, default [d] postgresql module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 /]#

By the way, YUM is no longer the default package manager in RHEL 8. The command is still available, but it is actually an alias for the DNF tool (coming from Fedora):
[root@rhel8beta1 ~]# which yum
/usr/bin/yum
[root@rhel8beta1 ~]# ll /usr/bin/yum
lrwxrwxrwx. 1 root root 5 Oct 15 10:25 /usr/bin/yum -> dnf-3
[root@rhel8beta1 ~]#

If you want to have a look at the main usage differences between YUM and DNF, check this:
[root@rhel8beta1 /]# man yum2dnf
YUM2DNF(8)


NAME
yum2dnf - Changes in DNF compared to YUM
[...] [...]

Let's go back to our modules. Here is how you can check the packages contained in a module:
[root@rhel8beta1 /]# yum module info postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs) 2.9 kB/s | 4.1 kB 00:01
Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs) 2.9 kB/s | 4.1 kB 00:01
Name : postgresql
Stream : 10 [d] Version : 20180813131250
Context : 9edba152
Profiles : client, default [d] Default profiles : default
Repo : rhel-8-for-x86_64-appstream-beta-rpms
Summary : postgresql module
Description : This postgresql module has been generated.
Artifacts : postgresql-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-contrib-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-docs-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-plperl-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-plpython3-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-pltcl-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-server-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-server-devel-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-static-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-test-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-test-rpm-macros-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-upgrade-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-upgrade-devel-0:10.5-1.el8+1546+27ad5f8e.x86_64


Name : postgresql
Stream : 9.6
Version : 20180813131400
Context : 9edba152
Profiles : client, default [d] Default profiles : default
Repo : rhel-8-for-x86_64-appstream-beta-rpms
Summary : postgresql module
Description : This postgresql module has been generated.
Artifacts : postgresql-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-contrib-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-docs-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-plperl-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-plpython3-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-pltcl-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-server-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-server-devel-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-static-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-test-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-test-rpm-macros-0:9.6.10-1.el8+1547+210b7007.x86_64


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 /]#

The above output shows that a module can contain several streams. Each stream represents a different version of the application.
Moreover, a module can have a couple of profiles. A profile is a set of RPM packages selected to be installed together for a particular use case (server, client, development, minimal install, and so on).
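The module info sub-command also accepts a --profile switch to list the packages behind each profile; a quick sketch for the postgresql module (output omitted here):
[root@rhel8beta1 ~]# yum module info --profile postgresql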

To install an application from the default stream and with the default profile, add the ‘@’ character before the application name:
[root@rhel8beta1 /]# yum install @postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:25:55 ago on Mon Nov 26 16:27:13 2018.
Dependencies resolved.
=======================================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================================
Installing group/module packages:
postgresql-server x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 5.1 M
Installing dependencies:
libpq x86_64 10.5-1.el8 rhel-8-for-x86_64-appstream-beta-rpms 188 k
postgresql x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 1.5 M
Installing module profiles:
postgresql/default
Enabling module streams:
postgresql 10


Transaction Summary
=======================================================================================================================================================================================
Install 3 Packages


Total download size: 6.7 M
Installed size: 27 M
Is this ok [y/N]: y
[...] [...] [root@rhel8beta1 /]#

You can also use the “yum module install postgresql” command.

Quick check:
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 10.5
[root@rhel8beta1 ~]#

And if you want to install Postgres from an older stream and with another profile (here the Postgres 9.6 client only):
[root@rhel8beta1 /]# yum install @postgresql:9.6/client
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:33:45 ago on Mon Nov 26 16:27:13 2018.
Dependencies resolved.
=======================================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================================
Installing group/module packages:
postgresql x86_64 9.6.10-1.el8+1547+210b7007 rhel-8-for-x86_64-appstream-beta-rpms 1.4 M
Installing dependencies:
libpq x86_64 10.5-1.el8 rhel-8-for-x86_64-appstream-beta-rpms 188 k
Installing module profiles:
postgresql/client
Enabling module streams:
postgresql 9.6


Transaction Summary
=======================================================================================================================================================================================
Install 2 Packages


Total download size: 1.6 M
Installed size: 5.8 M
Is this ok [y/N]: y
[...] [...] [root@rhel8beta1 /]#

Check:
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 9.6.10
[root@rhel8beta1 ~]#


[root@rhel8beta1 ~]# yum module list --enabled
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:04:43 ago on Thu Dec 13 08:34:05 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
container-tools 1.0 [d][e] default [d] Common tools and dependencies for container runtimes
postgresql 9.6 [e] client [i], default [d] [i] postgresql module
satellite-5-client 1.0 [d][e] gui, default [d] Red Hat Satellite 5 client packages
virt rhel [d][e] default [d] Virtualization module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 ~]#

Hmmm… Only stream 9.6 is enabled? Let's try to enable version 10:
[root@rhel8beta1 ~]# yum module enable postgresql:10
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:08:06 ago on Thu Dec 13 07:52:30 2018.
Dependencies resolved.
======================================================================================================================================
Package Arch Version Repository Size
======================================================================================================================================
Switching module streams:
postgresql 9.6 -> 10


Transaction Summary
======================================================================================================================================


Is this ok [y/N]: y
Complete!


Switching module streams does not alter installed packages (see 'module enable' in dnf(8) for details)
[root@rhel8beta1 ~]#

It's better now:
[root@rhel8beta1 ~]# yum module list --enabled
Failed to set locale, defaulting to C
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:13:22 ago on Thu Dec 13 08:34:05 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
container-tools 1.0 [d][e] default [d] Common tools and dependencies for container runtimes
postgresql 10 [d][e] client [i], default [d] [i] postgresql module
satellite-5-client 1.0 [d][e] gui, default [d] Red Hat Satellite 5 client packages
virt rhel [d][e] default [d] Virtualization module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 ~]#

But…
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 9.6.10
[root@rhel8beta1 ~]#

…still using 9.6 :-(
After switching from one stream to another, we must upgrade the corresponding packages:
[root@rhel8beta1 ~]# yum distro-sync
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:18:40 ago on Thu Dec 13 08:34:05 2018.
Dependencies resolved.
==============================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================
Upgrading:
postgresql x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 1.5 M
postgresql-server x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 5.1 M


Transaction Summary
==============================================================================================================================================
Upgrade 2 Packages


Total download size: 6.5 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64.rpm 1.3 MB/s | 5.1 MB 00:03
(2/2): postgresql-10.5-1.el8+1546+27ad5f8e.x86_64.rpm 371 kB/s | 1.5 MB 00:04
--------------------------------------------------------------------------------------------------------------- --------------------------------
Total 1.6 MB/s | 6.5 MB 00:04
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Running scriptlet: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/1
Upgrade: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64
Upgrading : postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/4
Upgrade: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64
Upgrade: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64
Running scriptlet: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Upgrading : postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Running scriptlet: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Upgrade: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64
Upgraded: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Cleanup : postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Upgraded: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Upgraded: postgresql-9.6.10-1.el8+1547+210b7007.x86_64
Cleanup : postgresql-9.6.10-1.el8+1547+210b7007.x86_64 4/4
Upgraded: postgresql-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-9.6.10-1.el8+1547+210b7007.x86_64 4/4
Verifying : postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/4
Verifying : postgresql-9.6.10-1.el8+1547+210b7007.x86_64 2/4
Verifying : postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 3/4
Verifying : postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 4/4


Upgraded:
postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64


Complete!
[root@rhel8beta1 ~]#

And now it's fine:
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 10.5
[root@rhel8beta1 ~]#

So what?

That was only a first quick try of the AppStream functionality. What we can remember here is that with this new way of managing packages, we can benefit from the parallel availability of multiple versions of software. This is due to the dissociation between the OS core (BaseOS), which is still managed in a traditional way, and the user space (AppStream), which is now delivered in the form of modular application streams.
Up to now, when we wanted to upgrade an application to a given version, we had to think about the interdependencies between this application and the others that we didn't want to update. With RHEL 8, we can now upgrade one while keeping the others at their current version.

The article Red Hat Enterprise Linux 8 – Application Streams appeared first on Blog dbi services.

Understand Oracle Text at a glance

Thu, 2018-12-13 00:19

What is Oracle Text?

Oracle Text provides indexing, word and theme searching, and viewing capabilities for text in query applications and document classification applications.

Oracle text activation for a user

create user ORATXT identified by oratxt ;
grant ctxapp to ORATXT ;
grant execute on ctxsys.ctx_cls to ORATXT ;
grant execute on ctxsys.ctx_ddl to ORATXT ;
grant execute on ctxsys.ctx_doc to ORATXT ;
grant execute on ctxsys.ctx_output to ORATXT ;
grant execute on ctxsys.ctx_query to ORATXT ;
grant execute on ctxsys.ctx_report to ORATXT ;
grant execute on ctxsys.ctx_thes to ORATXT ;
grant execute on ctxsys.ctx_ulexer to ORATXT ;

Oracle Text configuration and usage

To design an Oracle Text application, first determine the type of queries you expect to run. This enables you to choose the most suitable index for the task. There are 4 use cases with Oracle Text:

  1. Document Collection Applications
    • The collection is typically static with no significant change in content after the initial indexing run. Documents can be of any size and of different formats, such as HTML, PDF, or Microsoft Word. These documents are stored in a document table. Searching is enabled by first indexing the document collection.
    • Queries usually consist of words or phrases. Application users can specify logical combinations of words and phrases using operators such as OR and AND. Other query operations can be used to improve the search results, such as stemming, proximity searching, and wildcarding.
    • An important factor for this type of application is retrieving documents relevant to a query while retrieving as few non-relevant documents as possible. The most relevant documents must be ranked high in the result list.
    • The queries for this type of application are best served by a CONTEXT index on your document table. To query this index, the application uses the SQL CONTAINS operator in the WHERE clause of a SELECT statement (an index-creation sketch follows this list).
    • Example of searching
    • SQL> select score(1), doc_id, html_content from docs where contains(html_content, 'dbi', 1) > 0;
       
      SCORE(1) ID HTML_CONTENT
      ---------- ---------- -----------------------------------------------------------
      4 1 <HTML>dbi services provide various IT services</HTML>
      4 9 <HTML>You can become expert with dbi services</HTML>
      4 3 <HTML>The compaany dbi services is in Switzerland.</HTML>

  2. Catalog Information Applications
    • The stored catalog information consists of text information, such as book titles, and related structured information, such as price. The information is usually updated regularly to keep the online catalog up to date with the inventory.
    • Queries are usually a combination of a text component and a structured component. Results are almost always sorted by a structured component, such as date or price. Good response time is always an important factor with this type of query application.
    • Catalog applications are best served by a CTXCAT index. Query this index with the CATSEARCH operator in the WHERE clause of a SELECT statement.
    • Example of searching
    • SQL> select product, price from auction where catsearch(title, 'IT', 'order by price')> 0;
       
      PRODUCT PRICE
      ----------------------------------- ----------
      IT Advice 1 hour 499
      Course IT management 3999
      License IT monitoring 199
      IT desk 810

  3. Document Classification Applications
    • In a document classification application, an incoming stream or a set of documents is compared to a pre-defined set of rules. When a document matches one or more rules, the application performs some action. For example, assume there is an incoming stream of news articles. You can define a rule to represent the category of Finance. The rule is essentially one or more queries that select documents about the subject of Finance. The rule might have the form ‘stocks or bonds or earnings’.
    • When a document arrives about a Wall Street earnings forecast and satisfies the rules for this category, the application takes an action, such as tagging the document as Finance or e-mailing one or more users.
    • To create a document classification application, create a table of rules and then create a CTXRULE index. To classify an incoming stream of text, use the MATCHES operator in the WHERE clause of a SELECT statement.
    • SQL> select category_id, category_name
      from categories
      where matches(blog_string, 'Dbi services add value to your IT infrastructure by providing experts in different technologies to cover all your needs.');
       
      CATEGORY_ID CATEGORY_NAME
      ----------- -----------------------------------
      9 Expertise
      2 Advertisement
      6 IT Services

  4. XML Search Applications
    • An XML search application performs searches over XML documents. A regular document search usually searches across a set of documents to return documents that satisfy a text predicate; an XML search often uses the structure of the XML document to restrict the search.
      Typically, only that part of the document that satisfies the search is returned. For example, instead of finding all purchase orders that contain the word electric, the user might need only purchase orders in which the comment field contains electric.
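The CONTAINS and CATSEARCH queries shown in use cases 1 and 2 assume that the corresponding indexes already exist. Here is a minimal creation sketch for both, using the docs and auction tables from the examples above and run through sqlplus (the connect string is hypothetical):

sqlplus ORATXT/oratxt@//localhost:1521/pdb1 <<'SQL'
-- CONTEXT index, queried with CONTAINS (use case 1)
CREATE INDEX docs_html_ctx ON docs(html_content) INDEXTYPE IS CTXSYS.CONTEXT;
-- CTXCAT index, queried with CATSEARCH (use case 2)
CREATE INDEX auction_title_cat ON auction(title) INDEXTYPE IS CTXSYS.CTXCAT;
EXIT
SQL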

In conclusion, there are various use cases for which Oracle Text will help you with text indexing. Before implementing it, verify which one best suits your needs. Also, it may be interesting to compare it with an external text indexer like Solr, which is also able to index your database via a JDBC driver.

I hope this helps, and please do not hesitate to contact us if you have any questions or require further information.

The article Understand Oracle Text at a glance appeared first on Blog dbi services.

Transparent Data Encryption – Certificate in master database: is it really a good practice?

Tue, 2018-12-11 02:00

As an encryption solution in SQL Server, Transparent Data Encryption (TDE) is simple and quick to set up. That's why it is a common encryption mechanism.

TDE encrypts data at the page level before SQL Server writes it to disk, using a database encryption key protected by a certificate. It is supposed to protect your environment in scenarios where SQL Server files (backups or data files) are stolen.
By default the certificate used for encryption is stored in the master database. But is it really a good practice?

Let’s see with a common scenario:
– One MSSQL instance where TDE is enabled for one database
– One MSSQL instance without any certificate

Configuring TDE for a database

Create a master key:

USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@$$w0rd'
GO

Create a certificate to use for TDE:

USE master
GO
CREATE CERTIFICATE CertinMaster
   WITH SUBJECT = 'Self-Signed Certificate in Master',   
   EXPIRY_DATE = '20241231';  
GO  


Create a database encryption key in the database you want to encrypt, protected by the certificate:

USE dummy
GO
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE CertinMaster
GO

Enable encryption on the database:

USE master
GO
ALTER DATABASE dummy SET ENCRYPTION ON
GO


At this step:
– TDE is configured for dummy database
– Certificate used for encryption is located in the master database
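A quick way to confirm which certificate actually protects the database encryption key is to query the TDE-related DMVs; a sketch with sqlcmd (the instance name is just a placeholder):

sqlcmd -S "SERVER\INSTANCE" -E -Q "SELECT db_name(dek.database_id) AS database_name, dek.encryption_state, c.name AS certificate_name FROM sys.dm_database_encryption_keys dek JOIN master.sys.certificates c ON c.thumbprint = dek.encryptor_thumbprint;"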

Restoring the encrypted database to another instance – first try

After backing up the dummy database and copying the backup to another MSSQL instance, an error is displayed when trying to restore this database, because the required certificate is missing:

[Screenshot: the restore fails because the certificate protecting the database encryption key is not available on the target instance]

This example shows how TDE protects data in a scenario where someone has stolen your backup file.
A similar error appears if an attach-file procedure is used instead.

Restoring the encrypted database to another instance – second try

After backing up the master database, let's try to restore it on the other MSSQL environment.
Restoring a master database is a bit more complicated than restoring a user database, but not impossible.

First, it is mandatory to set the instance in single-user mode, by adding the ‘-m’ startup parameter.
PowerShell code executed on target server:

$Path = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.ARCTURUS\MSSQLServer\Parameters\";
New-ItemProperty -Path $Path -PropertyType String -Name SQLArg3 -Value "-m"

Note: depending on how many startup parameters you have (the default is 3), adjust the item property name (SQLArgX) accordingly.

A restart of the instance is necessary to apply the startup parameter. Do not start the Agent service, otherwise it will take the only available connection to the instance in single-user mode.

As only one connection to the instance is authorized in this mode, PowerShell is well suited to do the job for the restore:

$Query= "RESTORE DATABASE master FROM DISK = 'C:\Mountpoints\Backup\master.bak' WITH REPLACE";
Invoke-Sqlcmd -ServerInstance KERRIGAN\ARCTURUS -Query $Query

The SQL Server is automatically stopped.

First, remove the single-user mode:

$Path = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.ARCTURUS\MSSQLServer\Parameters\";
Remove-ItemProperty -Path $Path -Name SQLArg3

After disabling Single User mode, SQL Server services (Engine + Agent) can start again.

After querying the new master database, we can see that it is indeed holding the certificate:
[Screenshot: the CertinMaster certificate is now present in the master database of the target instance]

Finally, restoring the dummy database on this instance works without any trouble: the data is freely accessible.

Conclusion

For complete protection, I do not recommend using this simple form of Transparent Data Encryption. As a best practice, I recommend storing the encryption key on a third-party system using Extensible Key Management (EKM).

 

The article Transparent Data Encryption – Certificate in master database: is it really a good practice? appeared first on Blog dbi services.

Foglight

Mon, 2018-12-10 11:37

Like many Oracle database administrators, I am a regular user of Oracle Enterprise Manager to monitor and administer my clients' databases. I am lucky because most of them have the Diagnostics Pack and the Tuning Pack :-).

But we always have to be curious and discover new products. As described by my colleague Nicolas Penot, I installed Foglight:

https://blog.dbi-services.com/foglight-monitoring-solution-for-databases-part-01/

https://blog.dbi-services.com/foglight-monitoring-solution-for-databases-part-02/

The installation was successful and very quick to complete. Remember that when you have to install OEM 13c, even if you know the product well and have a powerful server, you are pretty efficient if you can install it in half a day.

I successfully discovered two Oracle databases (versions 12.2.0.1 and 18.3.0.0). As you can see in the picture below, you can also monitor MySQL, Postgres, DB2 and SQL Server.

[Screenshot: database types supported by Foglight]

 

The first database summary page shows a lot of useful information (process activity, memory activity, logical reads, and so on):

[Screenshots: database summary page]

We can navigate through different menus, for example the Storage Tablespace Summary:

[Screenshot: Storage Tablespace Summary]

We can display the datafile I/O summary:

[Screenshot: datafile I/O summary]

 

We can display the pluggable databases and get an overview of workload metrics with nice graphs:

[Screenshots: pluggable databases and workload metrics]

 

Foglight also displays the alert.log error messages with four categories (Informational, Warning, Critical or Fatal):

We created false critical messages in the alert.log file; I used dbms_system.ksdwrt() to generate fake ORA-00600 or ORA-04031 error messages:

[Screenshot: alert.log error messages displayed in Foglight]
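For reference, here is roughly how such a fake message can be written to the alert.log with dbms_system.ksdwrt(), run through sqlplus (the message text is free-form; destination 2 targets the alert.log):

sqlplus / as sysdba <<'SQL'
-- dest => 2 writes to the alert.log (1 = trace file, 3 = both)
exec dbms_system.ksdwrt(2, 'ORA-00600: internal error code, arguments: [fake error for Foglight alarm testing]');
EXIT
SQL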

 

You can visualize, enable or disable the alarms, and edit or modify the collections; the configuration is intuitive:

[Screenshot: alarms and collections configuration]

The report menu is also interesting; many templates for different databases are defined:

[Screenshot: report templates]

It is very difficult to make a choice between Foglight and OEM. My first opinion (but I did not test Foglight in an active production environment) is that Foglight is a monitoring solution for many types of databases (MySQL, Oracle, Postgres, SQL Server, DB2), whereas OEM is a monitoring and administration tool for Oracle databases.

Admittedly, you can add the SQL Server or MySQL plugin, or the Medora plugin for Postgres, in OEM in order to administer SQL Server, MySQL or Postgres databases, but you will never reach the powerful functionality offered by OEM to monitor and administer Oracle databases.

The big advantage of both OEM and Foglight is their threshold and notification functionality and their reporting facilities. I did not test SQL PI in Foglight a lot, and I wonder whether, like with Oracle, we have the possibility to get a historical graph of what happened the week before, to display the execution plan and, of course, to run the SQL Tuning Advisor to propose another execution plan, for example.

The time dedicated to administering Foglight seems to be equivalent to OEM. You cannot just install those enterprise products and let them live; like OEM, Foglight seems to be a complex system and requires significant time to run effectively.

If you do not have a lot of databases or a big budget, you can use simple SQL scripts to keep yourself alerted. If you have several different types of databases to monitor, it might be a good idea to test and use Foglight. If most of your databases are Oracle-based and you have paid for the Tuning and Diagnostics Packs, use OEM to fully benefit from the administration and monitoring console.

 

The article Foglight appeared first on Blog dbi services.

First immersion in the Docker Conference EU 2018

Sat, 2018-12-08 12:02

In short, it was a very interesting event for both Devs and Ops. Every day was organized around workshops, hands-on labs and hallway tracks. My colleague Mehdi Bada and I tried to attend as many sessions as possible, but obviously it was difficult to cover all the topics.


Anyway, the workshops and hands-on labs were very interesting, especially if you like to mix theory and practice. But I have to admit we sometimes struggled to keep up with the pace of some workshops. We ran into what I call the “context switch” issue between following the trainer's explanation and doing the exercises at the same time :) The Migrating .NET Applications to Docker workshop with Elton Stoneman (Docker) is one that comes to mind in this case :)

As database specialists at dbi services we obviously had a special focus on storage-oriented topics, and we were interested in attending sessions and workshops on this subject, including the Use Cases and Practical Solutions for Docker Container Storage on Swarm and K8s session with Don Stewart (Docker) and Mark Church (Docker) as well as the Container Storage Panel Q&A with Ed Beauvais (Oracle), Chris Brandon (Storage OS Inc) and Keith Hudgins (Docker). We got an overview of the different possible solutions to implement file-based, block-based and object-based storage in order to address different workload patterns including fileserver, OLTP, BigData etc. The container storage landscape is large and vendor-specific today, but the Docker storage team announced plans to introduce first snapshot / restore capabilities and to provide a CSI (Container Storage Interface) to offer a simple community-driven approach and a more predictable and functional interface for the most common use cases. Let's see what happens in the future, but my guess (speculation mode) is that for "specific" applications like databases, vendor storage drivers will likely remain the most viable option when performance is at the heart of concerns.
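As a small illustration of what volume drivers look like from the Docker CLI today (a minimal sketch using the default local driver; a vendor storage plugin would simply take its place):

docker plugin ls                                             # list installed (volume) plugins
docker volume create --driver local dbdata                   # create a named volume; a vendor driver would replace "local"
docker run -d --name demo -v dbdata:/data alpine sleep 3600  # mount the volume into a container
docker volume inspect dbdata                                 # show where the data actually lives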

blog 149 - 1 - storage drivers

Even if containers were formally designed to handle stateless applications, it is not uncommon to see databases in such infrastructures nowadays. After all, databases are also (special) applications, right? I was already convinced that containerization infrastructures are now mature enough to handle database workloads, and these sessions reinforced my strong belief that Docker Swarm and K8s are production ready for databases, at least from a storage perspective.

We also got the opportunity to attend workshops and sessions around container orchestrator topics, mainly Docker Swarm and K8s orchestration. It was interesting to see the same question often raised by attendees during these sessions: do we have to use Swarm over K8s or vice versa? This is also a question we are going to ask ourselves for a dbi services internal project by the way and, obviously, there is no black-or-white answer. What is certain is that Docker Swarm remains important for customers, as confirmed by Steve Singh during the first general session on Tuesday 4th December 2018. We got feedback from customer stories like Citizens Bank that the orchestrator choice depends on different factors, and in the context of this customer they are using both from Docker EE :) We also attended other interesting Swarm and K8s infrastructure topics, including Swarm Orchestration – Features and Workflows by Bret Fisher (Docker Captain) and Container Networking for Swarm and Kubernetes in Docker Enterprise by Guillaume Morini (Docker). Finally, and probably one of my favorite workshops, was Troubleshooting with Sysdig by Michael Ducy. Sysdig is one of the well-known monitoring / troubleshooting tools for containers on the market. Let's say it was a subtle combination of a deep-dive immersion into Linux kernel principles and practical scenarios about using the sysdig tools to fix container issues, as confirmed by my working desktop below:

blog 149 - 2- WS sysdig

In addition to sessions, workshops and hands-on-labs, new announcements were done at the DockerCon EU 2018, during general sessions with Steve Singh (CEO) and Scott Johnston (Chief Product Officer) as main speakers.

blog 149 - 3 - dockercon general session 1

The first general session announcements included new innovative tools such as docker-app and docker-assemble, enhancements of the docker stack support for both Swarm and Kubernetes since Docker EE 2.0, and probably the most expected one: Docker Desktop Enterprise. It turns out that the adoption of Docker Desktop by developers was a real success, but it was not really designed to scale to enterprise-class environments, and this is basically what the Docker Desktop Enterprise product is supposed to address.

It was also an opportunity to get some interesting figures about Docker (EE) adoption across the world:

  • 1M new developers
  • 5M new applications
  • 1B containers downloaded every week
  • 650+ customers on Docker EE
  • 76.4% of companies running mission-critical apps in containers in production

The last one is by far my favorite because it highlights that most Docker environments are no longer limited to developer-scoped environments. Moreover, it is worth noting that the following figures seem to point out that Docker is not a visionary developer whim anymore and that it drives strong customer adoption thanks to an interesting ROI:

  • 69% differentiating products and services vs competitors
  • 70% increasing sales of products
  • 71% bringing products to market faster

Finally, let's finish with the second general session, which was more Docker community-oriented, and I know how important a community can be for contribution and for bringing people together. Like Microsoft's MVPs, Docker Captains are the Docker counterpart, and they were thanked for their wonderful contribution during this event. But obviously contribution goes beyond MVPs, ACEs or Docker Captains, and Kal De (CTO, EVP, Product Development) explained how to contribute to the different Docker projects and then showed contribution figures from the community through GitHub:

  • Compose 1 MM monthly
  • 25K new compose files published on GitHub per week
  • 14K GitHub contributors – 280+ people

This first immersion in the Docker world conference was definitely a good experience and a great opportunity to feel the momentum around Docker and the future directions taken by the company. I also appreciated the discussions and feedback from some attendees during the hallway track to prepare our future challenges on this topic.

The article First immersion in the Docker Conference EU 2018 appeared first on the dbi services blog.

PostgreSQL : Get my database server name

Fri, 2018-12-07 16:32

I was looking for a built-in function to get the hostname of the server hosting my PostgreSQL cluster.
But it seems there is no such built-in function. Looking at the available extensions, I found the hostname extension, which allows getting the database server host name.
In this blog I explain how to install and use it. The installation is very easy. The first step is to download it here.

Then let's go to the directory where the archive was decompressed and run the command make:

14:46:26 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1] ls
Changes  doc  hostname.control  Makefile  META.json  README.md  sql  src  test
14:46:48 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1] make
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -I. -I./ -I/u01/app/postgres/product/95/db_0/include/server -I/u01/app/postgres/product/95/db_0/include/internal -D_GNU_SOURCE -I/usr/include/libxml2   -c -o src/hostname.o src/hostname.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -L/u01/app/postgres/product/95/db_0/lib -Wl,--as-needed -Wl,-rpath,'/u01/app/postgres/product/95/db_0/lib',--enable-new-dtags  -shared -o src/hostname.so src/hostname.o
cp sql/hostname.sql sql/hostname--1.0.0.sql
14:46:55 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1]

Once done, let's run the command make install:

pg-hostname-master/ [PG1] make install
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/lib'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/doc/extension'
/bin/install -c -m 644 .//hostname.control '/u01/app/postgres/product/95/db_0/share/extension/'
/bin/install -c -m 644 .//sql/hostname--1.0.0.sql .//sql/hostname--unpackaged--1.0.0.sql  '/u01/app/postgres/product/95/db_0/share/extension/'
/bin/install -c -m 755  src/hostname.so '/u01/app/postgres/product/95/db_0/lib/'
/bin/install -c -m 644 .//doc/hostname.mmd '/u01/app/postgres/product/95/db_0/share/doc/extension/'
14:47:29 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1]

If everything is OK, we should now have the extension files in our $PGHOME/share/extension directory:

15:02:39 postgres@dbi-pg-essentials:[PG1] ls -ltra *hostname*
-rw-r--r--. 1 postgres postgres 140 Dec  5 14:47 hostname.control
-rw-r--r--. 1 postgres postgres  96 Dec  5 14:47 hostname--1.0.0.sql
-rw-r--r--. 1 postgres 
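
As an additional sanity check (a minimal example, assuming the cluster is running), we can verify that PostgreSQL sees the extension before creating it:

postgres=# SELECT name, default_version, installed_version
postgres-# FROM pg_available_extensions WHERE name = 'hostname';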

And that's all. We now just have to install the extension in the database:

postgres=# CREATE EXTENSION hostname;
CREATE EXTENSION

And now we can get the hostname of our server:

postgres=# SELECT hostname();
     hostname
-------------------
 dbi-pg-essentials
(1 row)

postgres=#

The article PostgreSQL : Get my database server name appeared first on the dbi services blog.

PostgreSQL 12 : New option --socketdir for pg_upgrade

Fri, 2018-12-07 16:30

PostgreSQL 12 is under development, but tests can already be done. The steps to install this version can be found in this dbi blog. Many thanks to Daniel.
While reading the documentation I found that there is a new parameter for pg_upgrade: --socketdir.
Why this parameter?
In fact, the path name of a UNIX socket is limited to about 108 characters (107 usable bytes on Linux, as the error below shows). Before PostgreSQL 12, the directory used for the sockets created for the temporary postmasters started by pg_upgrade was the current directory, and depending on the current directory the resulting socket path name might be too long. In PostgreSQL 12 the default location is still the current working directory, but the --socketdir parameter now allows us to specify another location.

To better understand I am going to upgrade from PostgreSQL 10 to PostgreSQL 12

20:59:44 postgres@dbi-pg-essentials:/u02/pgdata/PG12TEST/ [PG12TEST] psql -U postgres -d postgres
psql (12devel dbi services build)
Type "help" for help.

postgres=# select version();
                                                            version
---------------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11
(1 row)

postgres=#

Now let’s create a very deeply nested directory

mkdir -p ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd

And let’s do a cd into this new directory and let’s try to do an upgrade

$ export PGDATAOLD=/u02/pgdata/PG3
$ export PGBINOLD=/u01/app/postgres/product/10/db_1/bin
$ export PGDATANEW=/u02/pgdata/PG12TEST
$ export PGBINNEW=/u01/app/postgres/product/12dev/db_0/bin

When running pg_upgrade with the check option, we get the following error:

$ pg_upgrade -c
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: Unix-domain socket path "/home/postgres/ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/.s.PGSQL.50432" is too long (maximum 107 bytes)

could not connect to source postmaster started with the command:
"/u01/app/postgres/product/10/db_1/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/u02/pgdata/PG3" -o "-p 50432 -b  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/home/postgres/ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd'" start
Failure, exiting
$

The path name generated for the socket is indeed too long.
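A quick way to check in advance whether the current directory would hit this limit (a minimal sketch; 50432 is the port used by the temporary postmasters, as shown in the command above) is to count the bytes of the resulting socket path:

$ echo -n "$PWD/.s.PGSQL.50432" | wc -c

If the result is greater than 107, running pg_upgrade from this directory will fail unless --socketdir points somewhere shorter.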
If we use the new parameter, we can specify another location for the sockets, and we can see that the checks are now successful:

$ pg_upgrade -c --socketdir=/home/postgres/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDs                               ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*

We can then upgrade the cluster using this new parameter while still staying in this newly created directory:

$ pg_upgrade  --socketdir=/home/postgres/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDs                               ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
$
Conclusion

In this blog we have seen the new --socketdir option for pg_upgrade. It is good to know that this parameter exists, but in most cases the current working directory should be fine for an upgrade.

The article PostgreSQL 12 : New option --socketdir for pg_upgrade appeared first on the dbi services blog.

DockerCon18 Barcelona – Day 3

Thu, 2018-12-06 06:37

Wednesday was the last day of DockerCon18 Europe. Like the previous days, we started the journey with a two-hour keynote, this time more oriented towards the Docker community. The core message of the keynote: the community is one of the pillars of open source technologies, and Docker Inc wants to push the community aspect more and more. The community is growing very fast.

They took the opportunity to award the community leader of the year and to name a new Docker Captain, Bret Fisher.

dockercommunity-min

Then we attended an interesting session: Docker Storage with Swarm and Kubernetes.

The presenter started the session with a funny part: Container Storage Fake News!! For a few minutes, he listed one by one all the fake news related to storage in the container world. The best fake news for us:

RDBMS and databases cannot be run in containers: NO! Official images are available from the vendors. The best example is SQL Server, which provides a very competitive Docker image for its users.

The core message of the session is that database containers are coming more and more and will be used and deployed. Another very interesting point is the collaboration between Docker and storage providers, who are developing APIs for Docker compatibility, so in the future each storage provider will have its own API to communicate with Docker containers.

The last but not least session of the day for me was about Provisioning and Managing Storage for Docker Containers.

The goal of the session was to explain how we can easily manage storage operations for containers.

The Docker EE platform with Kubernetes (with PVs and PVCs) will help us to manage storage operations for containers in the future.

PV = Persistent Volume
PVC = Persistent Volume Claim

They also presented the difference between static and dynamic provisioning in Kubernetes and the future of storage management in Docker using CSI.
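For reference, here is a minimal sketch of how this looks from the kubectl side (generic commands, not specific to Docker EE): with static provisioning an administrator pre-creates PersistentVolumes that claims then bind to, while with dynamic provisioning a PVC referencing a StorageClass gets its volume created on demand.

kubectl get storageclass          # provisioners available for dynamic provisioning
kubectl get pv                    # pre-created (or dynamically provisioned) PersistentVolumes
kubectl get pvc --all-namespaces  # claims and their binding status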

Core message: Docker is making storage a priority.

oznor

The article DockerCon18 Barcelona – Day 3 appeared first on the dbi services blog.

DockerCon18 Barcelona – Day 2

Thu, 2018-12-06 04:59

Tuesday was the second day in Barcelona for DockerCon18. We attended the first general session in the morning. It was a mix of presentations, live demos and the participation of big Docker customers in EMEA, such as Société Générale in France, who presented the impact of Docker on their daily business. The main message of the first part of the keynote was: "How Docker can help you make the digital transformation of your business".

In the second part, new features were presented during the live demos:

  • docker stack deployment using Docker EE
  • docker-assemble command: builds a Docker image without any configuration, starting from a git repository of the application source code.
  • docker stack command: deploys an application stack using a Compose file (see the short example after this list).
  • docker-app command: a utility to help make Compose files more reusable and shareable.
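For illustration, here is a minimal example of the docker stack workflow (assuming a Swarm manager node and a docker-compose.yml file in the current directory; the stack name myapp is arbitrary):

docker stack deploy -c docker-compose.yml myapp   # deploy or update the stack from the Compose file
docker stack services myapp                       # list the services of the stack
docker stack rm myapp                             # remove the stack again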

Then they presented the introduction of Kubernetes support on the Docker EE platform.

Screenshot 2018-12-06 at 11.02.01

Finally, they presented the way to deploy an application with the Docker Desktop Application Designer.

The keynote video is also available here, for those interested.

After the keynote, we attended a very interesting workshop concerning storage in the Docker EE 2.1 platform, given by Don Stewart, a Solution Architect at Docker.

storagews-min

In the lab, we discovered the types of storage options that are available and how to implement them within a container environment.
Lab link for those interested: https://github.com/donmstewart/docker-storage-workshop

The first session of the afternoon was about the Docker Enterprise 2.1 platform: Architecture Overview and Use Cases.

 

The presentation was split into 3 main parts:

  • Docker Enterprise overview and architecture
  • Docker Enterprise 2.1 – What’s new with demos
  • Next steps

 

The first part of the presentation was more marketing oriented, presenting the Docker Enterprise platform.

Then the following new features were presented including small demos:

  • Extended Windows Support
  • Extended Kubernetes Support: Windows Server 2016, 1709, 1803, 2019
  • Improve Operational Insights: node metrics, data retention overview, more metrics, and charts…
  • Image management and storage optimizations
  • Security improvements

oznor

We finished the conference day with another workshop, yes… because during this conference the level and quality of the workshops were very good and interesting. The workshop was about Swarm Orchestration – Features and Workflows.
This was definitely one of the best workshops I attended.

Slides: https://container.training/swarm-selfpaced.yml.html#1
Github repository: https://github.com/jpetazzo/container.training

During this workshop, we created a complete Docker cluster using Swarm and did a deep dive into Swarm orchestration.

A very interesting day, with a lot of new things around Docker.

The article DockerCon18 Barcelona – Day 2 appeared first on the dbi services blog.

OEM Cloud Control 13c – Agent Gold Image

Wed, 2018-12-05 06:52
Introduction

I am currently setting up a new “Base Image” virtual machine (Red Hat Enterprise Linux 7.6) which will be used to create 6 brand new Oracle database servers requested by a customer. Besides installing and configuring the OS, I also have to install 3 Oracle Homes and one Cloud Control Agent 13c.

An OMS13c server already exists including an Agent patched with the EM-AGENT Bundle Patch 13.2.0.0.181031 (28680866) :
oracle@oms13c:/home/oracle/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Tue Nov 13 17:32:48 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@oms13c:/home/oracle/ [agent13c]

However, when I wanted to deploy the CC13c Agent on my Master VM from the Cloud Control 13c web interface (Setup > Add Target > Add Targets Manually > Install Agent on Host), the Agent was successfully installed but… without the patch 28680866 :( . That meant I would have to install the patch manually. Considering that the goal of creating a "Base Image" VM for this project is to quickly and easily deliver 6 database servers, having to install AND patch the Agent on each server is not very efficient and doesn't fit with what I want.
So I had to find a better way to deploy a patched Agent, and the solution was to use an Agent Gold Image. It allowed me to do exactly what I wanted.

In this post I will show how I have set this up.

Deploying the Agent

Here is how we can deploy the Agent on the Base Image VM. From Cloud Control 13c, we click on Setup > Add Target > Add Targets Manually > Install Agent on Host :
1

Then we insert the name of the target VM and we select the appropriate platform…
2_2

…and we specify the directory in which we want to install the Agent (Agent Home) :
3_2

Everything is now ready to start the deployment. We can click on Next to review the deployment configuration and on Deploy Agent to start.
Once the Agent is correctly deployed, the status should look like this :
4

As explained above we can see that the Agent is not patched with the Bundle Patch of October 2018 :
oracle@basevm:/u01/app/oracle/agent13c/agent_13.2.0.0.0/OPatch/ [agent13c] ./opatch lsinventory | grep 28680866
oracle@basevm:/u01/app/oracle/agent13c/agent_13.2.0.0.0/OPatch/ [agent13c]

We must patch it manually…

Updating OPatch

Before installing a patch it is highly recommended to update the OPatch utility first. All versions of the tool are available here. The current one on my VM is 13.8.0.0.0 :
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] opatch version
OPatch Version: 13.8.0.0.0


OPatch succeeded.

We must use the following command to update OPatch :
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] unzip -q p6880880_139000_Generic.zip
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] cd 6880880/
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c] $ORACLE_HOME/oracle_common/jdk/bin/java -jar ./opatch_generic.jar -silent oracle_home=$ORACLE_HOME
Launcher log file is /tmp/OraInstall2018-11-23_02-58-11PM/launcher2018-11-23_02-58-11PM.log.
Extracting the installer . . . . Done
Checking if CPU speed is above 300 MHz. Actual 2099.998 MHz Passed
Checking swap space: must be greater than 512 MB. Actual 4095 MB Passed
Checking if this platform requires a 64-bit JVM. Actual 64 Passed (64-bit not required)
Checking temp space: must be greater than 300 MB. Actual 27268 MB Passed
Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2018-11-23_02-58-11PM
Installation Summary
[...] [...] Logs successfully copied to /u01/app/oraInventory/logs.
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c] opatch version
OPatch Version: 13.9.3.3.0


OPatch succeeded.
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c]

You probably noticed that since OEM 13cR2 the way to update OPatch has changed : no more simple unzip, we now have to run a Java-based installer (opatch_generic.jar) instead (I don't really understand why…).

Patching the Agent

As OPatch is now up to date we can proceed with the installation of the patch 28680866 :
oracle@basevm:/u01/app/oracle/software/agent13c/patch/ [agent13c] unzip -q p28680866_132000_Generic.zip
oracle@basevm:/u01/app/oracle/software/agent13c/patch/ [agent13c] cd 28680866/28680866/
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Stopping agent ... stopped.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] opatch apply
Oracle Interim Patch Installer version 13.9.3.3.0
Copyright (c) 2018, Oracle Corporation. All rights reserved.


Oracle Home : /u01/app/oracle/agent13c/agent_13.2.0.0.0
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/agent13c/agent_13.2.0.0.0/oraInst.loc
OPatch version : 13.9.3.3.0
OUI version : 13.9.1.0.0
Log file location : /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/opatch/opatch2018-11-23_15-33-14PM_1.log


OPatch detects the Middleware Home as "/u01/app/oracle/agent13c"


Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 28680866


Do you want to proceed? [y|n] y
User Responded with: Y
All checks passed.
Backing up files...
Applying interim patch '28680866' to OH '/u01/app/oracle/agent13c/agent_13.2.0.0.0'


Patching component oracle.sysman.top.agent, 13.2.0.0.0...
Patch 28680866 successfully applied.
Log file location: /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/opatch/opatch2018-11-23_15-33-14PM_1.log


OPatch succeeded.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c]

Let’s restart the Agent and check that the patch has been applied :
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] emctl start agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Starting agent ................... started.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Mon Dec 03 17:17:25 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c]

Perfect. The Agent is now patched but…

Installing the DB plugin

…what about its plugins? We can see from the OMS13c server that the Agent doesn't have the database plugin installed :
oracle@oms13c:/home/oracle/ [oms13c] emcli login -username=sysman
Enter password :


Login successful
oracle@oms13c:/home/oracle/ [oms13c] emcli list_plugins_on_agent -agent_names="basevm.xx.yyyy.com:3872"
The Agent URL is https://basevm.xx.yyyy.com:3872/emd/main/ -
Plug-in Name Plugin-id Version [revision]

Oracle Home oracle.sysman.oh 13.2.0.0.0
Systems Infrastructure oracle.sysman.si 13.2.2.0.0

This is normal. As no Oracle databases are currently running on the VM, the DB plugin was not installed automatically during the Agent deployment. We have to install it manually using the following command :
oracle@oms13c:/home/oracle/ [oms13c] emcli deploy_plugin_on_agent -agent_names="basevm.xx.yyyy.com:3872" -plugin=oracle.sysman.db
Agent side plug-in deployment is in progress
Use "emcli get_plugin_deployment_status -plugin=oracle.sysman.db" to track the plug-in deployment status.
oracle@oms13c:/home/oracle/ [oms13c]

To check the status of the plugin installation :
oracle@oms13c:/home/oracle/ [oms13c] emcli get_plugin_deployment_status -plugin=oracle.sysman.db
Plug-in Deployment/Undeployment Status


Destination : Management Agent - basevm.xx.yyyy.com:3872
Plug-in Name : Oracle Database
Version : 13.2.2.0.0
ID : oracle.sysman.db
Content : Plug-in
Action : Deployment
Status : Success
Steps Info:
---------------------------------------- ------------------------- ------------------------- ----------
Step Start Time End Time Status
---------------------------------------- ------------------------- ------------------------- ----------
Submit job for deployment 11/23/18 4:06:29 PM CET 11/23/18 4:06:30 PM CET Success


Initialize 11/23/18 4:06:32 PM CET 11/23/18 4:06:43 PM CET Success


Validate Environment 11/23/18 4:06:44 PM CET 11/23/18 4:06:44 PM CET Success


Install software 11/23/18 4:06:44 PM CET 11/23/18 4:06:45 PM CET Success


Attach Oracle Home to Inventory 11/23/18 4:06:46 PM CET 11/23/18 4:07:04 PM CET Success


Configure plug-in on Management Agent 11/23/18 4:07:05 PM CET 11/23/18 4:07:28 PM CET Success


Update inventory 11/23/18 4:07:23 PM CET 11/23/18 4:07:28 PM CET Success


---------------------------------------- ------------------------- ------------------------- ----------
oracle@oms13c:/home/oracle/ [oms13c]

Quick check :
oracle@oms13c:/home/oracle/ emcli list_plugins_on_agent -agent_names="basevm.xx.yyyy.com:3872"
The Agent URL is https://basevm.xx.yyyy.com:3872/emd/main/ -
Plug-in Name Plugin-id Version [revision]

Oracle Database oracle.sysman.db 13.2.2.0.0
Oracle Home oracle.sysman.oh 13.2.0.0.0
Systems Infrastructure oracle.sysman.si 13.2.2.0.0


oracle@oms13c:/home/oracle/ [oms13c]

The Agent is now exactly in the state in which we want to deploy it on all 6 servers (OPatch up to date, Agent patched, DB plugin installed).
It’s now time to move forward with the creation of an Agent Gold Image.

Creating the Agent Gold image

Going back to Cloud Control we can navigate to Setup > Manage Cloud Control > Gold Agent Images :
Screenshot from 2018-12-03 21-13-14
We click on Manage All Images
6

…then on Create and we give a name to our Image :
7

Once the Image is created, we must create its 1st version. We click on the Image name and then on Action > Create. From here we can select the Agent configured earlier on the VM: it will be the source of the Gold Image :
8

The creation of the Gold Agent Image and its 1st version can also be done from the command line with the following emcli command :
oracle@oms13c:/home/oracle/ [oms13c] emcli create_gold_agent_image -image_name="agent13c_gold_image" -version_name="gold_image_v1" -source_agent="basevm.xx.yyyy.com:3872"
A gold agent image create operation with name "GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042" has been submitted.
You can track the progress of this session using the command "emcli get_gold_agent_image_activity_status -operation_name=GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042"


oracle@oms13c:/home/oracle/ [oms13c] emcli get_gold_agent_image_activity_status -operation_name=GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042
Inputs
------
Gold Image Version Name : gold_image_v1
Gold Image Name : agent13c_gold_image
Source Agent : basevm.xx.yyyy.com:3872
Working Directory : %agentStateDir%/install


Status
-------
Step Name Status Error Cause Recommendation
Create Gold Agent Image IN_PROGRESS


oracle@oms13c:/home/oracle/

The Gold Agent Image is now created. We can start to deploy it on the other servers in the same way as the first deployment, but this time by selecting With Gold Image :
9

Once the Agent is deployed on the server we can see that OPatch is up to date :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] opatch version
OPatch Version: 13.9.3.3.0


OPatch succeeded.
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

The Agent Bundle Patch is installed :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Mon Dec 03 17:17:25 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

And the DB plugin is ready :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] ll
total 24
drwxr-xr-x. 31 oracle oinstall 4096 Dec 3 22:59 agent_13.2.0.0.0
-rw-r--r--. 1 oracle oinstall 209 Dec 3 22:32 agentimage.properties
drwxr-xr-x. 8 oracle oinstall 98 Dec 3 22:58 agent_inst
-rw-r--r--. 1 oracle oinstall 565 Dec 3 22:56 agentInstall.rsp
-rw-r--r--. 1 oracle oinstall 19 Dec 3 22:56 emctlcfg.rsp
-rw-r-----. 1 oracle oinstall 350 Dec 3 22:32 plugins.txt
-rw-r--r--. 1 oracle oinstall 470 Dec 3 22:57 plugins.txt.status
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] cat plugins.txt.status
oracle.sysman.oh|13.2.0.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.oh|13.2.0.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.db|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.db|13.2.2.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.xa|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.emas|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.si|13.2.2.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.si|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

Conclusion

Using a Gold Image drastically eases the management of OMS Agents in Oracle environments. In addition to allowing massive deployment on targets, it is also possible to manage several Gold Images with different patch levels. The hosts are simply subscribed to a specific Image and follow its life cycle (new patches, new plugins, and so on).
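For reference, the subscription part can also be scripted with emcli (a sketch based on the gold agent image verb family; the exact verb and parameter names are my assumption here and should be double-checked with emcli help on your OMS):

oracle@oms13c:/home/oracle/ [oms13c] emcli list_gold_agent_images
oracle@oms13c:/home/oracle/ [oms13c] emcli subscribe_agents -image_name="agent13c_gold_image" -agents="srvora01.xx.yyyy.com:3872"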

Think about it during your next Oracle monitoring project!

The article OEM Cloud Control 13c – Agent Gold Image appeared first on the dbi services blog.

odacli create-database fails on ODA X7-2HA with java.lang.OutOfMemoryError

Tue, 2018-12-04 16:06

Today I was onsite at a customer and he told me: "I can no longer create databases on my ODA X7-2HA, every time I try to use odacli create-database it fails, please help."

Ok, let's check what happens. The customer shares the Oracle Homes and wants to create an 11.2.0.4 database:

[root@robucnoroda020 ~]# odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
157bfdf4-4430-4fb1-878e-2fb803ee54bd     OraDB11204_home1     11.2.0.4.180417 (27441052, 27338049)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
2aaba0e6-4482-4c9f-8d98-4a9d72fdb96e     OraDB12102_home1     12.1.0.2.180417 (27338020, 27338029)     /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured
ad2b0d0a-11c1-4a15-b22a-f698496cd606     OraDB12201_home1     12.2.0.1.180417 (27464465, 27674384)     /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

[root@robucnoroda020 ~]#

Ok, we try to create an 11.2.0.4 database:

[root@robucnoroda020 log]# odacli create-database -n FOO -dh 157bfdf4-4430-4fb1-878e-2fb803ee54bd -cs AL32UTF8 -y RAC -r ACFS -m
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
 ID: 1959838e-34a6-419e-94da-08b931a039cc
 Description: Database service creation with db name: FOO
 Status: Created
 Created: December 4, 2018 11:11:26 PM EET
 Message:

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@robucnoroda020 log]#

The job was created successfully; let's check what's going on:

[root@robucnoroda020 log]# odacli describe-job -i 1959838e-34a6-419e-94da-08b931a039cc

Job details
----------------------------------------------------------------
 ID: 1959838e-34a6-419e-94da-08b931a039cc
 Description: Database service creation with db name: FOO
 Status: Failure
 Created: December 4, 2018 11:11:26 PM EET
 Message: DCS-10001:Internal error encountered: Failed to create the database FOO.

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Database Service creation December 4, 2018 11:11:26 PM EET December 4, 2018 11:13:01 PM EET Failure
Database Service creation December 4, 2018 11:11:26 PM EET December 4, 2018 11:13:01 PM EET Failure
Setting up ssh equivalance December 4, 2018 11:11:26 PM EET December 4, 2018 11:11:46 PM EET Success
Creating volume dclFOO December 4, 2018 11:11:47 PM EET December 4, 2018 11:12:04 PM EET Success
Creating volume datFOO December 4, 2018 11:12:04 PM EET December 4, 2018 11:12:21 PM EET Success
Creating ACFS filesystem for DATA December 4, 2018 11:12:21 PM EET December 4, 2018 11:12:34 PM EET Success
Database Service creation December 4, 2018 11:12:34 PM EET December 4, 2018 11:13:01 PM EET Failure
Database Creation December 4, 2018 11:12:34 PM EET December 4, 2018 11:13:00 PM EET Failure

[root@robucnoroda020 log]#

Indeed, the job has failed. Next we check the DCS log, where we can see the database creation failure:

2018-12-04 23:13:01,209 DEBUG [Database Service creation] [] c.o.d.c.t.r.TaskReportRecorder:  Compile task plan for ServiceJobReport
'{
  "updatedTime" : null,
  "jobId" : "1959838e-34a6-419e-94da-08b931a039cc",
  "status" : "Failure",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : 1543957886185,
  "resourceList" : [ ],
  "description" : "Database service creation with db name: FOO"
}'...

2018-12-04 23:13:01,219 DEBUG [Database Service creation] [] c.o.d.a.t.TaskServiceRequest: Task[id: 1959838e-34a6-419e-94da-08b931a039cc, jobid: 1959838e-34a6-419e-94da-08b931a039cc, TaskName: Database Service creation] call() completed.
2018-12-04 23:13:01,219 INFO [Database Service creation] [] c.o.d.a.t.TaskServiceRequest: Task[id: 1959838e-34a6-419e-94da-08b931a039cc, jobid: 1959838e-34a6-419e-94da-08b931a039cc, TaskName: Database Service creation] completed: Failure

Ok, the log doesn't tell us what went wrong, but behind the scenes odacli create-database uses dbca from the requested Oracle Home. In the next step we check the dbca logs:

oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204] ll
total 276
-rw-r----- 1 oracle oinstall 275412 Dec  4 23:13 trace.log
oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204]

oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204] cat trace.log

......

[main] [ 2018-12-04 23:12:57.546 EET ] [InventoryUtil.getHomeName:111]  homeName = OraDB11204_home1
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at oracle.xml.parser.v2.XMLDocument.createNodeFromType(XMLDocument.java:4132)
	at oracle.xml.parser.v2.XMLDocument.createElement(XMLDocument.java:2801)
	at oracle.xml.parser.v2.DocumentBuilder.startElement(DocumentBuilder.java:488)
	at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java:1616)
	at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:456)
	at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:402)
	at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:244)
	at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:155)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:172)
	at oracle.sysman.oix.oixd.OixdDOMReader.getDocument(OixdDOMReader.java:42)
	at oracle.sysman.oic.oics.OicsCheckPointReader.buildCheckpoint(OicsCheckPointReader.java:75)
	at oracle.sysman.oic.oics.OicsCheckPointSession.<init>(OicsCheckPointSession.java:101)
	at oracle.sysman.oic.oics.OicsCheckPointIndexSession.<init>(OicsCheckPointIndexSession.java:123)
	at oracle.sysman.oic.oics.OicsCheckPointFactory.getIndexSession(OicsCheckPointFactory.java:69)
	at oracle.sysman.assistants.util.CheckpointContext.getCheckPointSession(CheckpointContext.java:256)
	at oracle.sysman.assistants.util.CheckpointContext.getCheckPoint(CheckpointContext.java:245)
	at oracle.sysman.assistants.dbca.backend.Host.cleanup(Host.java:3710)
	at oracle.sysman.assistants.dbca.backend.SilentHost.cleanup(SilentHost.java:585)
	at oracle.sysman.assistants.dbca.Dbca.execute(Dbca.java:145)
	at oracle.sysman.assistants.dbca.Dbca.main(Dbca.java:189)
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [DbcaCleanupHook.run:44]  Cleanup started
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [OracleHome.cleanupDBOptionsIntance:1482]  DB Options dummy instance sid=null
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [DbcaCleanupHook.run:49]  Cleanup ended

Ah, there is a Java OutOfMemory exception. We know this one from older times: we have to increase the heap space for dbca's Java engine. Let's change to the ORACLE_HOME and check dbca:

oracle@robucnoroda020:/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/ [rdbms11204] grep JRE_OPT dbca
JRE_OPTIONS="${JRE_OPTIONS} -DSET_LAF=${SET_LAF} -Dsun.java2d.font.DisableAlgorithmicStyles=true -Dice.pilots.html4.ignoreNonGenericFonts=true  -DDISPLAY=${DISPLAY} -DJDBC_PROTOCOL=thin -mx128m"
exec $JRE_DIR/bin/java  $JRE_OPTIONS  $DEBUG_STRING -classpath $CLASSPATH oracle.sysman.assistants.dbca.Dbca $ARGUMENTS

In the dbca script, we see that the Java heap space is limited to 128MB; we change it to 512MB (and yes, create a backup first):
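A minimal way to script that change (a sketch; the -mx128m string is the one shown in the grep above and the backup name is arbitrary):

oracle@robucnoroda020:/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/ [rdbms11204] cp dbca dbca.orig
oracle@robucnoroda020:/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/ [rdbms11204] sed -i 's/-mx128m/-mx512m/' dbca

A grep now shows the new value: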

oracle@robucnoroda020:/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/ [rdbms11204] grep JRE_OPT dbca
JRE_OPTIONS="${JRE_OPTIONS} -DSET_LAF=${SET_LAF} -Dsun.java2d.font.DisableAlgorithmicStyles=true -Dice.pilots.html4.ignoreNonGenericFonts=true  -DDISPLAY=${DISPLAY} -DJDBC_PROTOCOL=thin -mx512m"
exec $JRE_DIR/bin/java  $JRE_OPTIONS  $DEBUG_STRING -classpath $CLASSPATH oracle.sysman.assistants.dbca.Dbca $ARGUMENTS

After deleting the failed database we try again to create our database FOO:

[root@robucnoroda020 log]# odacli create-database -n FOO -dh 157bfdf4-4430-4fb1-878e-2fb803ee54bd -cs AL32UTF8 -y RAC -r ACFS -m
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
                     ID:  b289ed58-c29f-4ea8-8aa8-46a5af8ca529
            Description:  Database service creation with db name: FOO
                 Status:  Created
                Created:  December 4, 2018 11:45:04 PM EET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

Let’s check what’s going on:

[root@robucnoroda020 log]# odacli describe-job -i b289ed58-c29f-4ea8-8aa8-46a5af8ca529

Job details
----------------------------------------------------------------
                     ID:  b289ed58-c29f-4ea8-8aa8-46a5af8ca529
            Description:  Database service creation with db name: FOO
                 Status:  Success
                Created:  December 4, 2018 11:45:04 PM EET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               December 4, 2018 11:45:05 PM EET    December 4, 2018 11:45:25 PM EET    Success
Creating volume dclFOO                   December 4, 2018 11:45:25 PM EET    December 4, 2018 11:45:42 PM EET    Success
Creating volume datFOO                   December 4, 2018 11:45:42 PM EET    December 4, 2018 11:45:59 PM EET    Success
Creating ACFS filesystem for DATA        December 4, 2018 11:45:59 PM EET    December 4, 2018 11:46:13 PM EET    Success
Database Service creation                December 4, 2018 11:46:13 PM EET    December 4, 2018 11:51:23 PM EET    Success
Database Creation                        December 4, 2018 11:46:13 PM EET    December 4, 2018 11:49:56 PM EET    Success
updating the Database version            December 4, 2018 11:51:21 PM EET    December 4, 2018 11:51:23 PM EET    Success
create Users tablespace                  December 4, 2018 11:51:23 PM EET    December 4, 2018 11:51:25 PM EET    Success

[root@robucnoroda020 log]#

Super, the database was successfully created. It seems that sometimes odacli create-database fails due to dbca's memory usage. So on ODA too, check your dbca logs if your database creation wasn't successful. If you see these heap space exceptions, don't be afraid to change dbca's heap memory allocation.

The article odacli create-database fails on ODA X7-2HA with java.lang.OutOfMemoryError appeared first on the dbi services blog.

DockerCon2018 Barcelona – Day 1

Mon, 2018-12-03 17:00

As a football fan, traveling to Barcelona without watching a football game is inconceivable, so I started my trip by attending the game against Villarreal at Camp Nou 8-)

FCB-min

 

DockerCon Day 1

Today, with David Barbarin, was our first day at DockerCon2018 in Barcelona. The conference takes place in a huge conference center and is split between several types of sessions including workshops, keynotes, hands-on labs and the hallway track.

The first day was only focused on workshops, hands-on labs and hallway track in where you can meet Docker experts and exchange on multiple topics.

Interesting workshops were proposed today, but to follow a workshop you had to register for it beforehand. Fortunately for latecomers, there was a waiting list based on first come, first served.

We started by following a very interesting workshop: Migrating .NET application to Docker Containers.

DotNet_WS1-min

The instructor showed us how to migrate a monolithic application to Docker containers. The starting point was a simple .NET application running in a single container and, step by step, the instructors explained to us, through efficient demos, how to split the different services of the application into a microservices architecture. We easily had access to a lab environment hosted in Azure through an RDP connection.

The 2-hour workshop was split into the following parts:

  1. Building a single .NET application in one Docker container (see the short sketch after this list).
  2. Splitting the home page from the rest of the website, by using two containers.
  3. Adding an SQL Server database for the data, with persistent storage.
  4. Adding an NGINX proxy to redirect requests for the homepage and the other pages of the website.
  5. Creating a container for the API with a connection to the database.
  6. Adding a message queue for the website.
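For illustration, the very first step boils down to something like this (a minimal sketch with hypothetical image and container names; the workshop used its own lab images):

docker build -t dotnet-monolith .                   # build the image from the application's Dockerfile
docker run -d -p 80:80 --name web dotnet-monolith   # run the whole application in a single container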

 

After lunch, we planned to follow another workshop concerning storage in Docker, but the session and the waiting list were full. So we decided to get started with the Hands-On Labs. After signing up for the Hands-On Labs, you get access to your own hosted environment and can explore all the features and capabilities of Docker through different labs:

  • Docker for Beginners – Linux
  • Docker for Beginners – Windows
  • Docker EE – an Introduction
  • Docker Security
  • Modernizing Traditional Java Applications
  • Docker EE with Kubernetes

 

We finally ended the day by attending one of the last workshops of the day: Kubernetes Security, by Dimitris Kapadinis. During this workshop, the instructor showed us the different methods to secure a Kubernetes cluster.

Kubernetes-WS-min (1)

The workshop was composed of the following parts:

  1. Create a Kubernetes cluster with Play-with-Kubernetes (or minikube locally).
  2. Create and deploy a simple web application using Kubernetes.
  3. Hack the web application by entering inside the web-app pod.
  4. Protect the application by creating a security context for Kubernetes – Securing Kubernetes components.

 

It was a very intensive and interesting first day. I learned a lot through the different workshops and labs I did, so see you tomorrow  ;-)

 

The article DockerCon2018 Barcelona – Day 1 appeared first on the dbi services blog.

Oooooops or how to undelete a file on an ext4 filesystem

Mon, 2018-12-03 07:06

It happens within the blink of an eye.
A delete command was executed, and half a second after you hit the enter button you knew it: that was a mistake.
This is the scenario which leads to this blog entry, in which I show you how you can get your files back if you are lucky…

Short summary for the desperate

If you landed here you are probably in the same situation I was in, so here is a short summary:
Extundelete did not work for me, but ext4magic did, and I had to compile it from the sources.

  • Remount the filesystem read-only or umount it as soon as possible after the incident
  • Backup your inode table you will need it for the restore
    • debugfs -R "dump /tmp/VMSSD01.journal" /dev/mapper/VMSSD01-VMSSD01
  • Check at which time your files were still there
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -H -a $(date -d "-3hours" +%s)
  • List the files within this timepoint
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -l
  • Restore the file to a different disc/mountpoint
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -j /tmp/VMSSD01.journal -r -d /tmp/recover
  • Be happy and promise to never do it again
And now the whole story

So it happened that I deleted two VM images by accident. I was cleaning up my environment and there were two files, centos75_base_clone-1.qcow2 and centos75_base_clone-2.qcow2. As you can see I was using a clean and good naming convention which points out directly that these are the OS image files for my "nomachine" and my "networkmaster" machines… Especially the second one, with my dhcp, dns, nfs and iscsi configuration, would take some time to configure again.
At first nothing seemed to be wrong, all VMs were running normally, until I tried to restart one of them and I went from :cool: to :shock: and in the end to :oops:

I remembered that it was very important to unmount the filesystem as quickly as possible and to stop changing anything on it:
umount /VMSSD01
So a solution had to be found. A short Google search brought me to a tool with the promising name "extundelete", which can be found in the CentOS repository in the current version 0.2.4 from 2012…
So a yum install -y extundelete and a man extundelete later I tried the command
extundelete --restore-all --after $(date -d "-2 hours" +%s) /dev/mapper/VMSSD01-VMSSD01
And… it did not work.
A cryptic core dump and no solution on Google, so I went from :shock: to :cry:.
extundelete_coredump
But it was not the time to give up. With the courage of the desperate, I searched around and found the tool ext4magic. Magic never sounded better than at this very moment. The tool is newer than extundelete, even though it builds on it. So I downloaded and compiled the newest version 0.3.2 (from 2014). Before you can compile the source you need some dependencies:

yum install -y libblkid \
libblkid-devel \
zerofree e2fsp* \
zlib-devel \
libbz2-devel \
bzip2-devel \
file-devel

and to add some more "Magic" you also need yum install -y perl-File-LibMagic

A short ./configure && make later I got a binary and, to say it with Star Wars, "A New Hope" started growing in me.

I listed all the files deleted in the last 3 hours and there they were. At least I thought these had to be my image files:
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -H -a $(date -d "-3hours" +%s)
ext4magic_showInode

I listed the content at the different timestamps and found at least one of my files. The timestamp 1542797503 showed some files, so I tried to list all files from an earlier timestamp, and one of my missing image files showed up.
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -l
ext4magic_file2restore
My mood started getting better and better and switched from :cry: to :???:.
I tried to restore my file:
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -j /tmp/VMSSD01.journal -r -d /VMSSD02/recovery/
ext4magic_restoreInProgress
My first file was back :grin: . But the tool did not stop: it recovered more and more files and my hope was growing to get both files back. The first file was back with its original name. For the second one, it was not that clear what had happened. The tool was still running, recovering file after file after file and putting everything into the MAGIC-2 subdirectories.

I canceled the recovery job and gave the recovered files a shot.
ext4magic_file2restore_unknown
After renaming the *.unknown files I tried to boot up the VM. To my surprise the first try was successful and all my VMs were back online.

Summary
  • Do not delete your files (obviously).
  • Use a clear naming convention for all your files.
  • An lsof before deleting a supposedly unused file is always a good idea.
  • ext4magic worked for me and did as promised. My files are back, the VMs are up and running again. I am happy and :cool: .

The article Oooooops or how to undelete a file on an ext4 filesystem appeared first on the dbi services blog.

OEM 13c Repository view gc$metric_values_daily

Mon, 2018-12-03 03:50

Oracle OEM 13c offers the possibility to generate reports about Oracle Database Tablespace Monthly Space Usage. The displayed information is quite useful and display a lot of information:

Screen Shot 2018-11-23 at 12.09.37

 

Screen Shot 2018-11-23 at 12.15.16

 

The report displays the details for each month and for each tablespace, but the GUI is sometimes painful: you have a nice report but many tablespaces and many numbers :=(

At a client's site, I had the following situation: a production database had its data files distributed across three file systems, /u01, /u02 and /u03, and I wanted to know the evolution of the available space for the data files on each file system.

My first idea was to write SQL queries against the tablespace history view dba_hist_tbspc_space_usage, but this view does not contain any information about the file system, and as a consequence it is not possible to know how much space will be needed by each file system in the future.

OEM 13c offers the capability of viewing graphs for most of the metrics within OEM, but forecasting and trending capabilities are not present, and most categories of metrics are set up for real-time alerting, not for historical trending.

And I did not find any views like dba_hist_datafiles_space_usage :=)

So I checked in EM13c to verify that the file system space usage is correctly monitored. Naturally each file system is monitored and triggers a warning when the available space is below 20%, or a critical alert when it is below 5%.

As a consequence I had the idea to query the OEM 13c repository views, and this was the good solution.

First we can query the mgmt$alert_current view to display the alerts generated for the metric_name Filesystems on targets of type host:

SQL> select target_name , target_type, metric_name, metric_column, column_label,alert_state, violation_level
  2  FROM MGMT$ALERT_CURRENT
  3  where metric_name ='Filesystems'
  4  and target_guid in (SELECT target_guid
  5  FROM mgmt$target
  6* WHERE target_type ='host');

em13c.localdomain
host
Filesystems
pctAvailable
Filesystem Space Available (%)
Warning 			20

There are a lot of management repository views providing access to target, metric and monitoring information stored in the Management Repository. Those views offer the possibility to create your own scripts to query historical data.

In my case I used the gc$metric_values_daily view corresponding to the data in the EM_METRIC_VALUES_DAILY table:

SQL> desc gc$metric_values_daily
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 ENTITY_TYPE				   NOT NULL VARCHAR2(64)
 ENTITY_NAME				   NOT NULL VARCHAR2(256)
 ENTITY_GUID				   NOT NULL RAW(16)
 PARENT_ME_TYPE 				    VARCHAR2(64)
 PARENT_ME_NAME 				    VARCHAR2(256)
 PARENT_ME_GUID 			   NOT NULL RAW(16)
 TYPE_META_VER				   NOT NULL VARCHAR2(8)
 METRIC_GROUP_NAME			   NOT NULL VARCHAR2(64)
 METRIC_COLUMN_NAME			   NOT NULL VARCHAR2(64)
 COLUMN_TYPE				   NOT NULL NUMBER(1)
 COLUMN_INDEX				   NOT NULL NUMBER(3)
 DATA_COLUMN_TYPE			   NOT NULL NUMBER(2)
 METRIC_GROUP_ID			   NOT NULL NUMBER(38)
 METRIC_GROUP_LABEL				    VARCHAR2(64)
 METRIC_GROUP_LABEL_NLSID			    VARCHAR2(64)
 METRIC_COLUMN_ID			   NOT NULL NUMBER(38)
 METRIC_COLUMN_LABEL				    VARCHAR2(64)
 METRIC_COLUMN_LABEL_NLSID			    VARCHAR2(64)
 DESCRIPTION					    VARCHAR2(1024)
 SHORT_NAME					    VARCHAR2(40)
 UNIT						    VARCHAR2(64)
 IS_FOR_SUMMARY 				    NUMBER
 IS_STATEFUL					    NUMBER
 NON_THRESHOLDED_ALERTS 			    NUMBER
 METRIC_KEY_ID				   NOT NULL NUMBER(38)
 KEY_PART_1				   NOT NULL VARCHAR2(256)
 KEY_PART_2				   NOT NULL VARCHAR2(256)
 KEY_PART_3				   NOT NULL VARCHAR2(256)
 KEY_PART_4				   NOT NULL VARCHAR2(256)
 KEY_PART_5				   NOT NULL VARCHAR2(256)
 KEY_PART_6				   NOT NULL VARCHAR2(256)
 KEY_PART_7				   NOT NULL VARCHAR2(256)
 COLLECTION_TIME			   NOT NULL DATE
 COLLECTION_TIME_UTC				    DATE
 COUNT_OF_COLLECTIONS			   NOT NULL NUMBER(38)
 AVG_VALUE					    NUMBER
 MIN_VALUE					    NUMBER
 MAX_VALUE					    NUMBER
 STDDEV_VALUE					    NUMBER

The meaning of the main columns:

  • ENTITY_TYPE: the type of the target, for example host or oracle_database
  • ENTITY_NAME: the target or component name, for example the hostname
  • METRIC_GROUP_NAME: name of the metric group, for example Filesystems
  • METRIC_COLUMN_NAME: name of the metric column, for example available or pctAvailable
  • KEY_PART_1: key part 1 of the composite key, for example the name of the file system
  • KEY_PART_2: key part 2 of the composite key
  • COLLECTION_TIME: collection time in the target time zone
  • AVG_VALUE: average value of the metric for the day
  • MIN_VALUE: minimum value of the metric for the day
  • MAX_VALUE: maximum value of the metric for the day
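If you are not sure which metrics are collected for a given target, the view itself can tell you. The following query is only an exploratory sketch, not part of the original analysis; the entity name is the same placeholder host as in the rest of this post:

-- List the file system metric columns and keys collected daily for one host
select distinct metric_group_name,
                metric_column_name,
                key_part_1
from   gc$metric_values_daily
where  entity_type       = 'host'
and    entity_name       = 'oraprod.com'
and    metric_group_name = 'Filesystems'
order by 1, 2, 3;

This quickly shows which file systems (KEY_PART_1) and which metric columns are stored day by day for that host.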

So in my case I only had to execute the following SQL query:

select entity_name, 
key_part_1,
collection_time, 
avg_value as avg, 
avg_value/1000000 as percent  from gc$metric_values_daily
where metric_group_name='Filesystems' 
and metric_column_name= 'available' 
and key_part_1='/u01' 
and entity_name ='oraprod.com'
order by 1,2
/

oraprod.com     /u01          04-OCT-16  169545.44  	.380928435
oraprod.com     /u01          05-OCT-16  169572.44	    .380928435
oraprod.com     /u01          06-OCT-16  169583.44	    .380928435
…
oraprod.com     /u01           17-NOV-18  169538.44 	.172295163
oraprod.com     /u01           18-NOV-18  169538.44 	.172295163
oraprod.com     /u01           19-NOV-18  169525.94 	.172282459
oraprod.com     /u01           20-NOV-18  169498.44 	.172254512

With this result I can easily see how the space used by each file system evolves over time. We can use Excel’s capabilities to determine the trend of the available size of each file system.
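If you prefer to stay in SQL, the repository can also give a rough trend directly. The query below is only a sketch I did not use in the original analysis: it relies on Oracle’s REGR_SLOPE aggregate to estimate how much the available value changes per day (in the unit of AVG_VALUE) for each file system of the placeholder host:

-- Estimate the daily change of the available space per file system
select entity_name,
       key_part_1,
       regr_slope(avg_value, collection_time - date '2016-01-01') as change_per_day,
       min(collection_time) as first_day,
       max(collection_time) as last_day
from   gc$metric_values_daily
where  metric_group_name  = 'Filesystems'
and    metric_column_name = 'available'
and    entity_name        = 'oraprod.com'
group by entity_name, key_part_1
order by entity_name, key_part_1;

A negative change_per_day means the available space is shrinking, and dividing the last collected value by that slope gives a very rough estimate of the number of days left.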

Conclusion:

We often forget that OEM is not only a graphical administration tool: the most interesting part of OEM is that it stores data for important metrics in the management repository at regular intervals. If you understand the structure of the management repository views, you have extraordinary possibilities to extract historical data and build graphs in order to analyze your infrastructure more precisely.

This article OEM 13c Repository view gc$metric_values_daily appeared first on Blog dbi services.

AWS re:invent 2018 – Day 5 – Final day

Sat, 2018-12-01 12:34

Friday was the last day of the AWS re:Invent convention in Las Vegas. I didn’t know in advance what to expect because I have to admit that, even if I have started to use AWS, there is still a lot to learn. And that’s why I personally think working with technology is interesting: it’s always moving and there is always something new to investigate or learn.

This morning, I attended a very interesting session about the life of a Cloud Ops engineer, with two people from AWS speaking about scenarios that may happen (or already happened) in the daily life of system engineers. I was quite surprised by one scenario where the starting point to analyze a failure was the billing console. But imagine: you get a call from a customer because the application is down and you don’t have much information about it. In AWS, you pay for the services you use, so you can find some resources directly on the bill. Then you can pull the thread and follow the different resources to get a better view. CloudTrail can then help to see what was changed on those resources because it keeps track of API calls.

Knowing some key services and enabling them can help operations/DevOps teams a lot to support applications in an AWS environment: CloudTrail, CloudWatch and GuardDuty to name a few. Using CloudFormation or similar tools to provision the infrastructure can also help to detect drift when there is an issue. It was definitely worth waking up this morning.

I ended the conference with a new-launch session about running Amazon RDS on VMware in a local datacenter. It allows customers to offload the management part to AWS while keeping the data on site. AWS will provide a set of control machines built by AWS and VMware, which is an AWS partner. These machines connect over a private VPN to control machines in the AWS Cloud infrastructure. It currently supports PostgreSQL and MySQL, with SQL Server and Oracle planned. It will be a BYOL (Bring Your Own License) model, so it will be interesting to see how to license that environment. Aurora is not supported, and with what I learned this week about its storage architecture I think it would be a difficult challenge.

With Amazon RDS on VMware and AWS Outposts, which was announced in the keynote, AWS is providing solutions for customers who want to use the Cloud but keep their data in a local datacenter. Oracle proposed Cloud at Customer to achieve a similar goal. Oracle’s proposal is built on hardware (engineered systems) provided and maintained by Oracle, while AWS’ solution is built on VMware. But many companies are already running VMware successfully on their own hardware, so the “fight” will be interesting.

This article AWS re:invent 2018 – Day 5 – Final day appeared first on Blog dbi services.

PostgreSQL 12: csv output format for psql

Sat, 2018-12-01 07:25

Getting data out of PostgreSQL in csv format is not a big issue. Using copy you can do that easily. PostgreSQL 12 will enhance psql so that you can directly return csv formatted output from a query.

Using “copy” you would do something like this:

postgres=# copy pg_database to '/var/tmp/aa.txt' with (format csv);
COPY 3
postgres=# \! cat /var/tmp/aa.txt
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
postgres=# 

What you cannot do directly with copy is dump a view:

postgres=# copy pg_settings to '/var/tmp/aa.txt' with (format csv);
ERROR:  cannot copy from view "pg_settings"
HINT:  Try the COPY (SELECT ...) TO variant.

Of course you can work around that by doing it like this:

postgres=# copy (select * from pg_settings) to '/var/tmp/aa.txt' with (format csv);
COPY 309
postgres=# \! head -2 /var/tmp/aa.txt
allow_system_table_mods,off,,Developer Options,Allows modifications of the structure of system tables.,,postmaster,bool,default,,,,off,off,,,f
application_name,psql,,Reporting and Logging / What to Log,Sets the application name to be reported in statistics and logs.,,user,string,client,,,,"",psql,,,f

With PostgreSQL 12 there will be an easier way of doing that:

postgres=# \pset format csv
Output format is csv.
postgres=# select * from pg_database;
oid,datname,datdba,encoding,datcollate,datctype,datistemplate,datallowconn,datconnlimit,datlastsysoid,datfrozenxid,datminmxid,dattablespace,datacl
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"

… or directly when executing psql:

postgres@pgbox:/home/postgres/ [PGDEV] psql -c "select * from pg_database" --csv  postgres
oid,datname,datdba,encoding,datcollate,datctype,datistemplate,datallowconn,datconnlimit,datlastsysoid,datfrozenxid,datminmxid,dattablespace,datacl
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,NULL
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
Time: 0.794 ms

This makes it very easy to pass the data to other programs for further processing. By switching to unaligned output mode you can even specify the field separator:

postgres@pgbox:/home/postgres/ [PGDEV] psql -c "select * from pg_database" --csv --field-separator='##' -A postgres
oid##datname##datdba##encoding##datcollate##datctype##datistemplate##datallowconn##datconnlimit##datlastsysoid##datfrozenxid##datminmxid##dattablespace##datacl
11788##postgres##10##6##en_US.utf8##en_US.utf8##f##t##-1##11787##598##1##1663##NULL
1##template1##10##6##en_US.utf8##en_US.utf8##t##t##-1##11787##598##1##1663##{=c/postgres,postgres=CTc/postgres}
11787##template0##10##6##en_US.utf8##en_US.utf8##t##f##-1##11787##598##1##1663##{=c/postgres,postgres=CTc/postgres}
(3 rows)
Time: 2.105 ms
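The csv format also combines nicely with psql’s \o meta command when you want to capture a result in a file from an interactive session. This is just a small sketch, the file path being an example:

postgres=# \pset format csv
Output format is csv.
postgres=# \o /var/tmp/pg_database.csv
postgres=# select * from pg_database;
postgres=# \o

After resetting \o, the file contains the same csv output, header line included, as shown above.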

This article PostgreSQL 12: csv output format for psql appeared first on Blog dbi services.

PostgreSQL 12: log_statement_sample_rate

Fri, 2018-11-30 12:41

A common way to identify long running queries in PostgreSQL is to set log_min_duration_statement to a value that is known to cause trouble. In other words: if you know most of your statements usually execute in under a second, you could set log_min_duration_statement to “2s” (which means two seconds) so that whenever a statement takes longer than two seconds it is reported in the log file. The issue with this, however, is that it can cause a lot of activity in the server log file, which of course is not good for performance either. PostgreSQL 12 will probably come with a solution to that.

The default value of log_min_duration_statement is “-1”, which means disabled:

postgres=# select version();
                                                  version                                                   
------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

postgres=# show log_min_duration_statement;
 log_min_duration_statement 
----------------------------
 -1
(1 row)

Let’s set this to 5ms and then generate some queries:

postgres=# alter system set log_min_duration_statement = '5ms';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# show log_min_duration_statement;
 log_min_duration_statement 
----------------------------
 5ms
(1 row)

When we execute some queries that take longer than 5ms they should get reported in the log file:

postgres=# select 'select pg_sleep(0.5)' from generate_series(1,10); \gexec
       ?column?       
----------------------
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
(10 rows)

 pg_sleep 
----------
 
(1 row)

...

Looking at the log file there are exactly 10 entries:

2018-11-28 03:13:32.240 CET - 20 - 28978 - [local] - postgres@postgres LOG:  duration: 529.825 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:32.770 CET - 21 - 28978 - [local] - postgres@postgres LOG:  duration: 529.904 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:33.273 CET - 22 - 28978 - [local] - postgres@postgres LOG:  duration: 501.729 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:33.783 CET - 23 - 28978 - [local] - postgres@postgres LOG:  duration: 509.532 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:34.305 CET - 24 - 28978 - [local] - postgres@postgres LOG:  duration: 520.946 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:34.809 CET - 25 - 28978 - [local] - postgres@postgres LOG:  duration: 502.624 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:35.315 CET - 26 - 28978 - [local] - postgres@postgres LOG:  duration: 505.043 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:35.817 CET - 27 - 28978 - [local] - postgres@postgres LOG:  duration: 502.034 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:36.321 CET - 28 - 28978 - [local] - postgres@postgres LOG:  duration: 503.417 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:36.824 CET - 29 - 28978 - [local] - postgres@postgres LOG:  duration: 501.790 ms  statement: select pg_sleep(0.5)

Imagine there are thousands of statements that exceed log_min_duration_statement: this will make the log file grow quite fast, and performance will suffer because writing the log file takes resources as well. PostgreSQL 12 introduces a new parameter, “log_statement_sample_rate”, which helps with this:

postgres=# show log_statement_sample_rate;
 log_statement_sample_rate 
---------------------------
 1
(1 row)

The default is 1, which means: log all statements that exceed the log_min_duration_statement threshold. When that is too much we can now go and say: we want only 10% of these statements to be logged:

postgres=# alter system set log_statement_sample_rate=0.1;
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# show log_statement_sample_rate;
 log_statement_sample_rate 
---------------------------
 0.1
(1 row)

Doing the same test as above again:

postgres=# select 'select pg_sleep(0.5)' from generate_series(1,10); \gexec
       ?column?       
----------------------
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
(10 rows)

 pg_sleep 
----------
 
(1 row)

...

… we only get one entry in the log file, instead of 10:

2018-11-28 03:20:36.218 CET - 34 - 28978 - [local] - postgres@postgres LOG:  duration: 504.845 ms  statement: select pg_sleep(0.5)

Nice. Btw: The commit is here.
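For reference, this is how the two settings used in this test would look combined in postgresql.conf; just a sketch, the values obviously have to match your own workload:

# report statements running longer than 5 milliseconds ...
log_min_duration_statement = '5ms'
# ... but only write roughly 10% of them to the log file
log_statement_sample_rate = 0.1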

This article PostgreSQL 12: log_statement_sample_rate appeared first on Blog dbi services.

AWS re:invent 2018 – Day 4

Fri, 2018-11-30 11:52

On Thursday morning it was time for the keynote from Dr Werner Vogels.

A big part of the keynote was about databases again. Maybe AWS people know that there are still lots of DBAs out there! There was again a focus on Amazon Aurora (relational), DynamoDB (NoSQL) and AWS Redshift, which are replacing the Oracle databases Amazon was using. There was also a focus on how S3 works to provide the best durability for the data. I’ve seen several sessions this week mentioning S3 as the object storage for building a data lake.

I did not mention it for yesterday’s keynote, but there are some guests coming on stage to speak about how they use AWS in their businesses. Today it was Ethan Kaplan from Fender. He spoke about the serverless movement that powers Fender’s new guitar learning application. This application was born because, even if there are still lots of people buying new guitars, most of them quit playing after 6 months. On the other side, people who continue playing buy 8 to 10 guitars, so it was important to develop a solution that keeps people playing.

The important guest for me was Yuri Misnik, a representative of National Australia Bank, speaking about the Cloud-first strategy within the bank. Banking is traditionally associated with legacy technologies and regulations that would work against moving to the Cloud. But they are targeting one third of their applications running in AWS by the year 2020 and they are moving fast.

There were fewer new launches than in Andy’s keynote but there are a few to mention:

  • Redshift concurrency scaling: allows Redshift to automatically increase capacity to avoid waits when the number of queries increases. The very nice part is the burst credit: for every 24 hours your cluster is running you get 1 hour of credit for this concurrent usage, making it almost transparent and free for most customers according to Werner
  • Support for new languages in AWS Lambda and, most importantly, support for custom runtimes to allow customers to run almost any language they like
  • AWS Lambda layers: you can now share a library between several Lambda functions without having to copy the code again
  • AWS Well-Architected Tool, based on the AWS Well-Architected Framework, to allow reviewing an architecture without the need for a meeting with AWS or an AWS partner

Again, the keynote consumed a big part of the day but I was able to attend two more sessions. The first was Kyle Hailey presenting Performance Insights. It’s a performance tool for Amazon RDS working with Aurora, PostgreSQL, MySQL and Oracle. SQL Server is planned and will be next on the list. It’s very promising even if it’s currently missing SQL execution plans. The strength of the tool is that it provides a consistent GUI across the different engines.

AWS Performance Insights

As an Oracle DBA, I have the feeling that the tool is heavily inspired by what was done quite some time ago when ASH was implemented directly in the database. The same AAS metric drives the analysis and it is based on a 1-second sampling interval. That is good news: most DBAs who already work this way with Oracle will quickly understand their load when moving to Amazon RDS. The fun part was a demo from the company Slalom, which developed an Alexa connector for Performance Insights: now you can speak to your database.

I finished the day with a workshop about CI/CD: Continuous Integration / Continuous Deployment or Delivery, depending on the level you reach. Most often we prefer, or we have to (validated environments), keep an approval step before moving to Production, so it is only continuous delivery. I can’t give all the details, but the workshop was well organized, with people able to answer questions and an infrastructure running without issues. It was an occasion to work a little bit with the OpsWorks service for a managed Puppet server and to discover the CodeBuild / CodePipeline services.

On Thursday night it’s the re:Play party, where nobody tells people that they can’t play anymore because they are too old :-) There will be a lot of activities besides two different stages: one for live music and one with DJs. But on Friday morning there will be the last sessions and it will still be interesting, stay tuned.

This article AWS re:invent 2018 – Day 4 appeared first on Blog dbi services.

Extend an Oracle Access Manager 12.2.1.3 with User Messaging Service

Fri, 2018-11-30 07:43

After upgrading an Oracle Access Manager from 11.1.2.3 to 12.2.1.3, one of my customers asked me to configure the Forgot Password feature. This feature requires the User Messaging Service (UMS) that was part of the SOA software in previous versions. Now it is part of the Oracle Access Manager software and the WebLogic Domain can be extended with the UMS services.

Extend WebLogic Domain

[oracle@vm03 ~]$ cd /u01/app/fmw_oim_12213/
[oracle@vm03 fmw_oim_12213]$ ls
cfgtoollogs coherence domain-registry.xml em idm inventory OPatch oracle_common oraInst.loc oui user_projects wlserver
[oracle@vm03 fmw_oim_12213]$ cd oracle_common/common/bin/
[oracle@vm03 bin]$ ./config.sh

Conf1
In the next wizard, we select the Oracle User Messaging Service Basic template, a quick start template that defines the managed server ums_server1 and targets all UMS components to that server.
Conf2
We keep the file-based persistence store, but it should be placed on a shared drive if a UMS WebLogic cluster is planned.
Conf3
The OPSS was already configured in the OAM WebLogic Domain, so there is nothing to change in the next two wizards.
Conf4
conf5
In the next wizards we get the JDBC connections from the RCU configuration schema.
conf6
conf7
The UMS repository schema was already installed. The JDBC connections show up correctly and the JDBC connection test went smoothly.
conf8
conf9
We had no keystore configured.
conf10
We had to go through the topology to configure the UMS server and the UMS cluster.
conf11
In the next wizard, the new ums_server1 appeared listening on port 7003, which is the default for all new WebLogic Servers in the configuration wizard. It is best to change it to the default SOA port, 8001 (this can help avoid trouble with hard-coded ports in some applications).
conf12
As this OAM WebLogic Domain was configured with clusters, I had to create a UMS cluster named ums_cluster.
conf13
I did not use server templates nor dynamic servers. Nothing changed in the next two wizards.
conf14
conf15
I assigned ums_server1 to the ums_cluster by moving it from the right panel to the left panel under the ums_cluster.
conf16
The Coherence cluster and the machine already existed in the OAM WebLogic Domain. No changes were made in the next two wizards.
conf17
conf18
ums_server1 was attached to the already existing machine by moving it from the left panel to the right panel under that machine.
conf19
The deployments and the services were not changed. No action was applied in the next two wizards.
conf20
conf21
We kept the default file name proposed for the persistence file store.
conf22
We checked the summary and clicked the Update button to extend the OAM WebLogic Domain.
conf23
All went smoothly without errors.
conf24
After the Domain had been extended, it was started. The first time, ums_server1 had to be started manually using the startManagedWebLogic.sh script.

Configure the UMS email driver.

We decided to use emails to send the One Time Pin token to the users requesting a password reset.

This configuration has to be done through the Fusion Middleware Enterprise Manager Console.
EM1
Select usermessagingserver (ums_server1).
EM2
EM3
At the bottom of the page, look for the Email driver and click the Configure button. The next page allows creating a new Email driver and then configuring it.
Em4
EM5
Here are the parameters that need to be set for the UMS to be able to send mails:

  • Name: MailServer
  • Capability: Send
  • Sender Address:
    Check ‘Use Default Sender Address’ and set the default email address: EMAIL:admin@dbi-workshop.com
  • Outgoing Mail Server: “to be filled with your email server name”
Usage

Using the Oracle Access Manager One Time Pin REST API, messages are pushed to the User Messaging Service JMS queue and sent by mail using the email driver.

My next blog will explain how to configure the Oracle Access Manager Forgot Password feature using the One Time Pin sent by mail.

This article Extend an Oracle Access Manager 12.2.1.3 with User Messaging Service appeared first on Blog dbi services.
