Feed aggregator

ODA, network interface and configure-firstnet

Yann Neuhaus - Fri, 2018-07-27 09:13

Deploying new ODA X7-2S or 2M appliances, I was curious how configure-firstnet would interact with the fiber and copper Ethernet network interfaces. Reading the documentation on the web, I could not clearly understand whether it is mandatory to have the ODA connected to the LAN, in addition to the ILOM connection, when performing an ODA reimage and/or running configure-firstnet. After digging deeper and a few tests, I wanted to share my experience in this blog.

configure-firstnet

Running configure-firstnet deserves a little attention as it can be run only once. In the example below, I'm running the firstnet configuration with a VLAN.

(Screenshot: running configure-firstnet)

Network-script files

The network script files are stored in /etc/sysconfig/network-scripts.

Bonding is configured on the btbond1 interface as follows:

  1. em2 as the primary interface.
  2. active-backup mode: em3 is used as backup only and will take over if em2 fails.
    BONDING_OPTS="mode=active-backup miimon=100 primary=em2"

The only officially supported bonding option on the ODA is "active-backup". Manually updating the ifcfg-btbond1 file with BONDING_OPTS="mode=4 miimon=100 lacp_rate=1" in order to implement LACP (Link Aggregation Control Protocol) would work if your switch is configured accordingly, but it is not supported. The recommendation is to use the ODA with "active-backup" mode only.

This can be seen in the ifcfg-btbond1 configuration file.


After running configure-firstnet with a VLAN, a new file ifcfg-btbond1.<vlan_id> will be created. This file holds the whole IP configuration (IP address, netmask, gateway). If no VLAN is used, the IP configuration will be added to the ifcfg-btbond1 configuration file itself.
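As an illustration, such files might look like this (a sketch with made-up values; your device names, VLAN id and IP settings will differ):

# /etc/sysconfig/network-scripts/ifcfg-btbond1
DEVICE=btbond1
ONBOOT=yes
BOOTPROTO=none
TYPE=BOND
BONDING_OPTS="mode=active-backup miimon=100 primary=em2"

# /etc/sysconfig/network-scripts/ifcfg-btbond1.100 (VLAN id 100 as an example)
DEVICE=btbond1.100
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1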


If we look more closely at the network configuration files, we can see that em2 and em3 are configured with btbond1 as their master.
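A sketch of such a slave configuration file (illustrative only):

# /etc/sysconfig/network-scripts/ifcfg-em2
DEVICE=em2
ONBOOT=yes
BOOTPROTO=none
MASTER=btbond1
SLAVE=yes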


As we can see, so far there is nothing in the network configuration files indicating whether you are using the fiber or the copper Ethernet interfaces.

em2 and em3 interfaces: Fiber or copper?

em2 and em3 interfaces are automatically bound either to fiber or to copper Ethernet according to the physical connection. em2 and em3 would then use either the 2 fiber ports (SFP28) or the 2 copper Ethernet ports (NET1 - NET2).
In fact, as soon as a GBIC converter is plugged into the fiber ports and a reboot is performed, em2 and em3 will automatically be linked to the fiber adapter. There is no need for any fiber cabling.

No GBIC converter installed on the ODA


em2 and em3 interfaces would be seen as Twisted Pair.
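One way to check this from the operating system, assuming the ethtool utility is available, is to query the interface (a sketch; the exact output depends on the driver):

[root@oda ~]# ethtool em2 | grep -i port
        Port: Twisted Pair

After a GBIC transceiver is detected, the same command would report Port: FIBRE.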


GBIC converter installed on the ODA

After a server reboot, em2 and em3 interfaces would be seen as Fiber.


Conclusion

Based on this short experience, we can see that fiber is detected as soon as a GBIC adapter is plugged into the SFP28 interfaces. The ifcfg network scripts are totally independent of this choice. Therefore, there is no need to have the ODA em2 and em3 network connections in place to reimage the appliance and run configure-firstnet. These connections will be mandatory for the next step, when we create the appliance.

 

The post ODA, network interface and configure-firstnet appeared first on Blog dbi services.

How to shrink tables with on commit materialized views

Yann Neuhaus - Fri, 2018-07-27 08:44

Usually it is not possible to shrink tables which are used by on commit materialized views.

The result is an ORA-10652 "Object has on-commit materialized views" error, whose action section suggests nothing.

There is a workaround for this error: convert all materialized views which rely on the table to be shrunk from on-commit to on-demand views. The application must tolerate that the affected materialized views are not updated while the table's space is being shrunk.
The affected materialized views can be identified by querying dba_mviews and checking whether the SQL query of each materialized view references the table to be shrunk.
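As a sketch, assuming the table to be shrunk is MYTABLE in schema MYSCHEMA (placeholders), the affected views could be located like this:

-- on-commit materialized views that depend on the table to be shrunk
SELECT m.owner, m.mview_name
FROM   dba_mviews m
       JOIN dba_dependencies d
         ON  d.owner = m.owner
         AND d.name  = m.mview_name
WHERE  d.type             = 'MATERIALIZED VIEW'
AND    d.referenced_type  = 'TABLE'
AND    d.referenced_owner = 'MYSCHEMA'
AND    d.referenced_name  = 'MYTABLE'
AND    m.refresh_mode     = 'COMMIT';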

This gives the following procedure:
-- allow rows to move (required for shrink)
alter table table_name enable row movement;
-- lift the on-commit restriction for the duration of the shrink
alter materialized view materialized_view_name refresh on demand;
-- reclaim the space
alter table table_name shrink space;
-- bring the materialized view back in sync
exec dbms_mview.refresh('materialized_view_name');
-- restore the on-commit refresh
alter materialized view materialized_view_name refresh on commit;
alter table table_name disable row movement;

Note: For alter table ... enable row movement and alter table ... shrink space, the table must be accessible; otherwise locks on the table may cause delays or errors.
The statements alter materialized view ... refresh on demand, exec dbms_mview.refresh and alter materialized view ... refresh on commit must be executed for every materialized view which uses the table to be shrunk.

 

The post How to shrink tables with on commit materialized views appeared first on Blog dbi services.

Running SQL Server containers on K8s Docker for Windows CE stable channel

Yann Neuhaus - Fri, 2018-07-27 07:33

The release of Docker for Windows Stable version 18.06.0-ce-win70 comes with some great new features I have been waiting for, including K8s support! That's pretty good news, because this support has existed on the Edge channel since the beginning of this year, but I had no chance to install a beta version on my laptop.

So, we get now a good opportunity to test locally our SQL Server image with a K8s single node architecture.

 


 

One interesting point here is that I may switch the context of the K8s infrastructure as shown below:


 

The first one (dbi8scluster) corresponds to my K8s cluster on Azure. I wrote about it some time ago. The second one is my single K8s node on my Windows 10 laptop. Switching between my different environments is very easy, as shown below:

[dab@DBI:$]> kubectl get nodes
NAME                       STATUS     ROLES     AGE       VERSION
aks-nodepool1-78763348-0   NotReady   agent     57d       v1.9.6
aks-nodepool1-78763348-1   NotReady   agent     57d       v1.9.6
C:\Users\dab\Desktop
[dab@DBI:$]> kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    1d        v1.10.3
C:\Users\dab\Desktop
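For completeness, the switch itself between the two contexts (not shown in the output above) is done with kubectl config, for instance:

[dab@DBI:$]> kubectl config get-contexts
[dab@DBI:$]> kubectl config use-context docker-for-desktop

The context names here are the ones from my environment; yours will differ.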

 

It is also interesting to get a picture of the installed containers that run the K8s infrastructure on Docker. Assuming you have already enabled the "show system containers" option in advanced mode, you may display all the K8s-related containers as follows:

[dab@DBI:$]> docker ps --format "table {{.ID}}\t {{.Names}}"
CONTAINER ID         NAMES
829a5941592e         k8s_compose_compose-7447646cf5-l5shp_docker_7e3ff6d9-908e-11e8-91f0-00155d0013a6_4
53666469f25d         k8s_compose_compose-api-6fbc44c575-7jrj8_docker_7e3ff0d0-908e-11e8-91f0-00155d0013a6_4
cd3772216e72         k8s_sidecar_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
8ae73505dfb0         k8s_dnsmasq_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
8066fbefc371         k8s_kubedns_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
2591d102e6fb         k8s_kube-proxy_kube-proxy-p9jv9_kube-system_7e43eaab-908e-11e8-91f0-00155d0013a6_4
80f6d997a225         k8s_POD_compose-7447646cf5-l5shp_docker_7e3ff6d9-908e-11e8-91f0-00155d0013a6_4
23751c4fd2fc         k8s_POD_kube-proxy-p9jv9_kube-system_7e43eaab-908e-11e8-91f0-00155d0013a6_4
f96406dedefb         k8s_POD_compose-api-6fbc44c575-7jrj8_docker_7e3ff0d0-908e-11e8-91f0-00155d0013a6_4
9149e9b91fd3         k8s_POD_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
2316ed63e2ee         k8s_kube-controller-manager_kube-controller-manager-docker-for-desktop_kube-system_120c685a17dc3d67e505450a6ea9243c_4
52defb42bbaf         k8s_kube-apiserver_kube-apiserver-docker-for-desktop_kube-system_814863b48e4b523c13081a7bb4c85f0d_4
3366ebf8f058         k8s_kube-scheduler_kube-scheduler-docker-for-desktop_kube-system_ea66a171667ec4aaf1b274428a42a7cf_4
bd903b9dce3f         k8s_etcd_etcd-docker-for-desktop_kube-system_d82203d846b07255217d0e72211752f0_4
7e650673b6d2         k8s_POD_kube-apiserver-docker-for-desktop_kube-system_814863b48e4b523c13081a7bb4c85f0d_4
24e4bfb59184         k8s_POD_kube-scheduler-docker-for-desktop_kube-system_ea66a171667ec4aaf1b274428a42a7cf_4
3422edb44165         k8s_POD_etcd-docker-for-desktop_kube-system_d82203d846b07255217d0e72211752f0_4
aeca6879906b         k8s_POD_kube-controller-manager-docker-for-desktop_kube-system_120c685a17dc3d67e505450a6ea9243c_4

 

We retrieve all the K8s components, including the API server, the controller manager, the K8s scheduler, the kube-proxy and the etcd cluster database. It is not my intention to go further into this topic here; I will probably get the opportunity to dig deeper in future blog posts.

My first test consisted of deploying our dbi services custom development image for SQL Server 2017 on Linux on my single K8s node. I had already done it with our dbi services production image on my previous K8s infrastructure on Azure, and it was interesting to check whether we have to operate in the same way. This is at least what I expected, and I was right. Just note that I didn't perform the same test with Windows containers yet, but it will come soon hopefully.

Obviously, I had to change my storage class to storageClassName: hostpath to point to my host storage, as follows:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-data-sql
  labels:
    type: local
spec:
  storageClassName: hostpath
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /T/Docker/DMK/BACKUP
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-data-sql
spec:
  storageClassName: hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

 

I just want to draw your attention to the hostPath, because we have to apply some modifications to the initial path for it to be understood by K8s. On the Windows side my path is T:/Docker/DMK/BACKUP, and it contains a backup of my custom AdventureWorks database for our tests.

Here is the command to deploy my persistent volume and the corresponding persistent volume claim that will be used by my SQL Server pod:

[dab@DBI:$]> kubectl create -f .\docker_k8s_storage.yaml
persistentvolume "pv-data-sql" created
persistentvolumeclaim "pv-claim-data-sql" created
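Before going further, a quick sanity check that the claim is bound to the volume (a sketch; the names are the ones created above):

[dab@DBI:$]> kubectl get pv,pvc

pv-data-sql should show a STATUS of Bound, paired with the pv-claim-data-sql claim.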

 

Then, as expected, my deployment file didn't change a lot:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pv-claim-data-sql
      containers:
      - name: mssql
        image: dbi/dbi_linux_sql2017:2017-CU4
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          value: "Password1"
          # valueFrom:
          #   secretKeyRef:
          #     name: mssql
          #     key: SA_PASSWORD 
        - name: DMK
          value: "Y"
        volumeMounts:
        - name: mssqldb
          mountPath: "/backup"
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer

 

The command to deploy my SQL Server pod and the corresponding service is:

[dab@DBI:$]> kubectl create -f .\docker_k8s_sql.yaml
deployment.apps "mssql-deployment" created
service "mssql-deployment" created

 

So, let’s get a picture of my new deployed environment:

[dab@DBI:$]> kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mssql-deployment   1         1         1            1           2m

[dab@DBI:$]> kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP          1d
mssql-deployment   LoadBalancer   10.109.238.88   localhost     1433:30200/TCP   2m

[dab@DBI:$]> kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
mssql-deployment-6c69bb6f7c-pqb2d   1/1       Running   0          7m        10.1.0.39   docker-for-desktop

 

Everything seems to be deployed successfully. I use a load balancer service here, but bear in mind that I have only one node. Here is a description of my newly deployed pod:

[dab@DBI:$]> kubectl describe pod mssql-deployment-6c69bb6f7c-pqb2d
Name:           mssql-deployment-6c69bb6f7c-pqb2d
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Fri, 27 Jul 2018 12:45:50 +0200
Labels:         app=mssql
                pod-template-hash=2725662937
Annotations:    <none>
Status:         Running
IP:             10.1.0.39
Controlled By:  ReplicaSet/mssql-deployment-6c69bb6f7c
Containers:
  mssql:
    Container ID:   docker://a17039dcdcb22c1b4b80c73fb17e73df90efda19ed77e215ef92bf86c9bfc538
    Image:          dbi/dbi_linux_sql2017:2017-CU4
    Image ID:       docker://sha256:2a693c121c33c390f944df7093b8c902f91940fa966ae8e5190a7a4a5b0681d2
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 27 Jul 2018 12:45:51 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      ACCEPT_EULA:        Y
      MSSQL_SA_PASSWORD:  Password1
      DMK:                Y
    Mounts:
      /backup from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qts2 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pv-claim-data-sql
    ReadOnly:   false
  default-token-5qts2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qts2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  Scheduled              3m    default-scheduler            Successfully assigned mssql-deployment-6c69bb6f7c-pqb2d to docker-for-desktop
  Normal  SuccessfulMountVolume  3m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "pv-data-sql"
  Normal  SuccessfulMountVolume  3m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-5qts2"
  Normal  Pulled                 3m    kubelet, docker-for-desktop  Container image "dbi/dbi_linux_sql2017:2017-CU4" already present on machine
  Normal  Created                3m    kubelet, docker-for-desktop  Created container
  Normal  Started                3m    kubelet, docker-for-desktop  Started container

 

And finally, let’s have a look at a sample of my SQL Server log:

========================== 2018-07-27 10:46:43 Restoring AdventureWorks database OK ==========================
========================== 2018-07-27 10:46:43 Installing TSQLt ==========================
2018-07-27 10:46:44.01 spid51      Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
2018-07-27 10:46:44.02 spid51      Configuration option 'clr enabled' changed from 0 to 1. Run the RECONFIGURE statement to install.
2018-07-27 10:46:44.02 spid51      Configuration option 'clr strict security' changed from 1 to 0. Run the RECONFIGURE statement to install.
2018-07-27 10:46:44.02 spid51      Configuration option 'max server memory (MB)' changed from 2147483647 to 3072. Run the RECONFIGURE statement to install.
Configuration option 'clr enabled' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'clr strict security' changed from 1 to 0. Run the RECONFIGURE statement to install.
Configuration option 'max server memory (MB)' changed from 2147483647 to 3072. Run the RECONFIGURE statement to install.
2018-07-27 10:46:47.39 spid51      Starting up database 'dbi_tools'.
2018-07-27 10:46:47.72 spid51      Parallel redo is started for database 'dbi_tools' with worker pool size [2].
2018-07-27 10:46:47.74 spid51      Parallel redo is shutdown for database 'dbi_tools' with worker pool size [2].
Installed at 2018-07-27 10:46:47.863
2018-07-27 10:46:48.54 spid51      AppDomain 3 (dbi_tools.dbo[runtime].2) created.

(1 rows affected)

+-----------------------------------------+
|                                         |
| Thank you for using tSQLt.              |
|                                         |
| tSQLt Version: 1.0.5873.27393           |
|                                         |
+-----------------------------------------+
0
========================== 2018-07-27 10:46:49 Installing TSQLt OK ==========================
======= 2018-07-27 10:46:49 MSSQL CONFIG COMPLETED =======
2018-07-27 10:51:03.35 spid55      Using 'dbghelp.dll' version '4.0.5'
2018-07-27 10:51:10.32 spid55      Attempting to load library 'xplog70.dll' into memory. This is an informational message only. No user action is required.
2018-07-27 10:51:10.40 spid55      Using 'xplog70.dll' version '2017.140.3022' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.

 

My SQL Server pod is up, with my restored AdventureWorks database and the backup accessible from the /backup path inside my container. We also added the tSQLt framework for our unit tests to the image.

Let's connect to my SQL Server pod by using the mssql-cli command-line tool:

C:\WINDOWS\system32>mssql-cli -S localhost -U sa -P Password1
Version: 0.15.0
Mail: sqlcli@microsoft.com
Home: http://github.com/dbcli/mssql-cli
master>

Time: 0.000s
master> select name from sys.databases;
+--------------------+
| name               |
|--------------------|
| master             |
| tempdb             |
| model              |
| msdb               |
| AdventureWorks_dbi |
| dbi_tools          |
+--------------------+
(6 rows affected)
Time: 0.406s

 

My first deployment is successful. There are plenty of topics to cover about K8s and SQL Server containers, and other write-ups will come soon for sure. Stay tuned!

The post Running SQL Server containers on K8s Docker for Windows CE stable channel appeared first on Blog dbi services.

18c (18.3) Installation On Premises

Hemant K Chitale - Thu, 2018-07-26 21:42
Documentation by @oraclebase (Tim Hall) on installing 18c (18.3) On Premises on OEL:

Oracle Database 18c Installation On Oracle Linux 6 (OL6) and 7 (OL7)
.
.
.




 
Categories: DBA Blogs

Number Datatype

Tom Kyte - Thu, 2018-07-26 16:26
Hi Tom, I declared the datatype of a column of a table as NUMBER and presumed it to be NUMBER(38) to be specific. But what I found strange is that it is accepting numbers beyond 38 digits, i.e. 1.111E+125. If the datatype's precision is 38 and ...
Categories: DBA Blogs

External table to skip the # row

Tom Kyte - Thu, 2018-07-26 16:26
I am using an external table to read a csv file, which has some rows with '#' at the beginning that need to be skipped. How can I do that? ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY GPC_DATA_CSV_DIR ACCESS PARA...
Categories: DBA Blogs

Missing RMAN Duplexed location

Tom Kyte - Thu, 2018-07-26 16:26
Hi, We are carrying out some RMAN prototyping in our offline environment for duplexing backup sets to two different locations. We have added the following RMAN persistent configuration settings: CONFIGURE DEVICE TYPE DISK PARALLELISM 2; CONFIG...
Categories: DBA Blogs

Multiple analytical functions in a query

Tom Kyte - Thu, 2018-07-26 16:26
Dear Tom, Thanks for this wonderful platform where there is always opportunity to learn something new on things we think we already know. Really appreciate it. I have a query regarding the analytical functions in Oracle. Query is based on the Emp...
Categories: DBA Blogs

online redo log corruption

Tom Kyte - Thu, 2018-07-26 16:26
Suppose I have 3 online redo log groups with two members in each group. If a group is corrupted, how do I find the exact corrupted group? How do I find whether the data in the online redo log files has been written to disk (datafil...
Categories: DBA Blogs

Oracle client for MACBOOK

Tom Kyte - Thu, 2018-07-26 16:26
When can we have full Oracle client software for the macOS platform, as the MacBook is widely used by Oracle users.
Categories: DBA Blogs

Translating Chinese to English in SQL with Microsoft Translator

Jeff Moss - Thu, 2018-07-26 13:14

In Oracle, I had a table of data which had some Chinese words that I needed to translate into English on the fly, in SQL…this is how I did that…

Microsoft have a translator facility here with the Translator Text API v3.0 to allow you to call it programmatically. I’m using Microsoft as I’m currently working on Azure – of course, there are other translation facilities available.

The API has a translate method which one needs to construct a call to. The format of the call is:

https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=xxxx&to=yyyy

…where xxxx is the from language, e.g. zh-Hans for Simplified Chinese (my case) and yyyy is the to language, e.g. en for English.

In the body of the request needs to be some JSON of the form:

[{"Text": "zzzz"}]

…where zzzz is the text that needs to be converted from Simplified Chinese to English.

Calling the API would result in a response which contains the translated text in JSON format.
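For instance, sending [{"Text": "敏捷的棕色狐狸跳过了懒狗"}] with from=zh-Hans and to=en returns a JSON array of roughly this shape (abridged; the exact wording of the translation may vary):

[{"translations": [{"text": "The agile brown fox jumped over the lazy dog", "to": "en"}]}]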

So, what we need to do is create an Oracle Function which can be called from SQL passing in the text that needs translating from a selected column. The function will call the Microsoft Translator API via UTL_HTTP to translate the text and return the translated text which is then displayed in the SQL output.

Thanks to Tim Hall for this article and Lucas Jellema for this article which helped me with some of this – I just had to do a few tweaks to get things to work in my use case, namely:

  1. Set up the Oracle Wallet for using HTTPS
  2. Convert the publish_cinema_event procedure Lucas wrote to a function so I could call it in SQL
  3. Use LENGTHB instead of LENGTH to determine the length of the text to be translated, due to the text being multi-byte
  4. Use WRITE_RAW and UTL_RAW.CAST_TO_RAW rather than WRITE_TEXT, otherwise the Chinese characters get mangled
  5. Set the body text of the request to be UTF-8 by calling UTL_HTTP.SET_BODY_CHARSET

Firstly the calls to the Microsoft Translator are via HTTPS rather than HTTP so I needed to set up Oracle Wallet with keys to facilitate that. I tried to follow the instructions on Tim’s page about using Chrome to get the certificate but no matter which option I chose it wouldn’t include the keys/certificates in the output file. Instead, I chose to go onto our Linux server and do it this way (adjust to suit your paths):

mkdir -p /u01/app/oracle/admin/ORCL/wallet
openssl s_client -showcerts -connect api.cognitive.microsofttranslator.com:443 </dev/null 2>/dev/null|openssl x509 -outform DER >/u01/app/oracle/admin/ORCL/wallet/ms_translate_key.der

This seemed to work fine: at least everything afterwards worked, and the end result was that we could call the API. Whatever the above did differently from Chrome I don't know, but it worked.

I then created a wallet on the Linux server:

orapki wallet create -wallet /u01/app/oracle/admin/ORCL/wallet -pwd MyPassword -auto_login
orapki wallet add -wallet /u01/app/oracle/admin/ORCL/wallet -trusted_cert -cert "/u01/app/oracle/admin/ORCL/wallet/ms_translate_key.der" -pwd MyPassword

Now once the wallet is created I created the following function:

SET DEFINE OFF
CREATE OR REPLACE FUNCTION translate_text(p_text_to_translate in varchar2
                                         ,p_language_from in varchar2
                                         ,p_language_to in varchar2
                                         ) RETURN VARCHAR2 IS
  req utl_http.req;
  res utl_http.resp;
  url VARCHAR2(4000) := 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from='||
                          p_language_from||'&to='||p_language_to;
  buffer VARCHAR2(4000); 
  content VARCHAR2(4000) := '[{"Text": "'||p_text_to_translate||'"}]';
BEGIN
  dbms_output.put_line('URL:'||url);
  dbms_output.put_line('CONTENT:'||content);
  dbms_output.put_line('CONTENT LENGTH:'||TO_CHAR(LENGTH(content)));
  req := utl_http.begin_request(url, 'POST', 'HTTP/1.1');
  utl_http.set_header(req, 'user-agent', 'mozilla/4.0'); 
  utl_http.set_header(req, 'content-type', 'application/json'); 
  utl_http.set_header(req, 'Ocp-Apim-Subscription-Key', 'OCP_APIM_SUBSCRIPTION_KEY'); 
  utl_http.set_header(req, 'Content-Length', LENGTHB(content));
  utl_http.set_body_charset(req, 'UTF-8');
  utl_http.write_raw(req,utl_raw.cast_to_raw(content));
  res := utl_http.get_response(req);
  utl_http.read_line(res, buffer);
  utl_http.end_response(res);
  RETURN buffer;
EXCEPTION
WHEN OTHERS
  THEN utl_http.end_response(res);
  RAISE;
END translate_text;
/

NOTE – The SET DEFINE OFF is important given the embedded ampersand characters. The OCP_APIM_SUBSCRIPTION_KEY value needs to have whatever is relevant for your subscription as well. You may need to set up ACLs for the user running this code – Tim and Lucas cover that in their articles.

Now to run the code, login to the database and run this to engage the wallet:

EXEC UTL_HTTP.set_wallet('file:/u01/app/oracle/admin/ORCL/wallet', NULL);

Create a test table with some Simplified Chinese in it:

create table test_chinese(chinese_text varchar2(200));
insert into test_chinese values('敏捷的棕色狐狸跳过了懒狗');
commit;

Now select the data out using the translate_text function and see what we get:

select chinese_text,translate_text(chinese_text,'zh-Hans','en') from test_chinese;

The returned translation is in JSON format but of course if you wanted you could extract the text element from it easily.
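For example, on Oracle 12c and later, JSON_VALUE can pull the translated text straight out of the response (a sketch assuming the v3.0 response shape shown earlier):

select json_value(translate_text(chinese_text,'zh-Hans','en'),
                  '$[0].translations[0].text') as english_text
from   test_chinese;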

That’s it.

Oracle Named a Leader in Digital Experience Development Platforms by Leading Industry Analyst Firm

Oracle Press Releases - Thu, 2018-07-26 07:00
Press Release
Oracle Named a Leader in Digital Experience Development Platforms by Leading Industry Analyst Firm Oracle recognized among vendors that enable developers to build a portfolio of digital experiences

Redwood Shores, Calif.—Jul 26, 2018

Oracle today announced that Forrester Research has named Oracle a leader in its “The Forrester Wave™: Digital Experience Development Platforms, Q2 2018” report. This placement confirms the exceptional growth of Oracle Mobile Cloud and Oracle’s innovative no-code solution that leverages the latest emerging technologies to help developers create a unified multi-channel digital experience across platforms.

“We believe this recognition by Forrester is further validation of Oracle’s commitment to building a comprehensive suite that enables enterprises to create unique, personalized digital experiences across web, mobile and chatbots,” said Suhas Uliyar, vice president, product management, Oracle. “Users have a variety of choices in the way they engage content and enterprises can’t assume stakeholders interact on any one particular channel. Using Oracle Mobile Cloud, customers can build across multiple channels and derive insights on usage and adoption across these to personalize end user engagement.”

In the report, Forrester used 33 criteria to evaluate nine digital experience development platform vendors, grouping these evaluations into three main categories: current offering, strategy, and market presence.

The author of The Forrester Wave™, Michael Facemire wrote: “Oracle has seen great adoption of its Mobile Cloud Enterprise platform since building it from the ground up in 2014. Oracle has added to this cloud-first platform with front-end tooling around web, chat, and low-code options. A unified programming model has allowed Oracle to build solid tooling (Visual Builder Cloud Service) to expose these components to a larger audience without going down the proprietary path where other vendors have stumbled in the past.”

Working with Oracle, Mutua Madrid Open became the first ATP World Tour Masters 1000 and Premier WTA tournament to offer fans an AI-equipped chatbot, “MatchBot,” for event information, including results, services, parking and more. “We wanted to position the Mutua Madrid Open as the tournament of the 21st century,” said Gerard Tsobanian, president and CEO, Mutua Madrid Open. “Development of the MatchBot using Oracle Mobile Cloud positions us at the forefront of technology and innovation. With this new technology, we were able to provide visitors with an amazing experience—a pleasant, simpler, and faster way to get the information they wanted about the tournament.”

Part of Oracle Cloud Platform, Oracle Mobile Cloud is a complete multi-channel platform managed by Oracle to help developers and enterprises engage intelligently and contextually with customers, business partners and employees through the end user’s channel of choice. It enables customers to deliver engaging, personalized digital experiences that will delight customers across multiple channels. Not only can customers engage via mobile and web channels, but now take advantage of the next leap in technology—artificial intelligence—for all platforms and intelligent bot services. In addition to providing a platform to build engaging experiences across mobile, bots and web, users also get actionable insights via analytics that provide deep understanding of user adoption behavior and app performance across platforms, so businesses can personalize engagement and ensure that everything is running at peak performance.

Contact Info
Jesse Caputo
Oracle
+1.650.506.5967
jesse.caputo@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe, and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Jesse Caputo

  • +1.650.506.5967

How to install MariaDB 10.2.9 from binary tarball on CentOS 7?

Yann Neuhaus - Thu, 2018-07-26 03:22

 

In this blog, I will show you how to install MariaDB 10.2.9 from the binary tarball. The installation will be done on a VirtualBox machine (2 GB RAM, 20 GB of disk).

There are several ways to install MariaDB on your Linux:

  • rpm
  • binary tarball
  • building it from source

Prerequisites:

First ssh to your linux server

Update it:

 [root@deploy mariadb]$ yum update -y

Install wget:

[root@deploy mariadb]$ yum install wget
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.spreitzer.ch

Remove all existing mariadb packages

[root@deploy ~]$ rpm -qa | grep mariadb
mariadb-libs-5.5.56-2.el7.x86_64
[root@deploy ~]$ yum remove mariadb-libs-5.5.56-2.el7.x86_64
Loaded plugins: fastestmirror
Resolving Dependencies

Create the directory for the binaries

[root@deploy ~]$ mkdir /mariadbBinaries

Create the directory for the datas

[root@deploy ~]$ mkdir /mariadbData/mysqld1 -p

Create a directory where we are going to put the my.cnf configuration file for MariaDB

[root@deploy ~]$ mkdir /mariadbConf
[root@deploy ~]$ touch /mariadbConf/my.cnf

Create the mysql group and user:

[root@deploy ~]$ groupadd mysql
[root@deploy ~]$ useradd -g mysql mysql
[root@deploy ~]$ passwd mysql
[root@deploy ~]$ echo "mysql ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

Change the ownership of the directories to mysql (the user must exist before chown can reference it):

[root@deploy ~]$ chown -R mysql:mysql /mariadbBinaries/ /mariadbConf/ /mariadbData/

Switch to the mysql user and download the tarball:
[root@deploy ~]$ su - mysql
[mysql@deploy ~]$ cd /mariadbBinaries/
[mysql@deploy mariadbBinaries]$ wget https://downloads.mariadb.org/interstitial/mariadb-10.2.9/bintar-linux-x86_64/mariadb-10.2.9-linux-x86_64.tar.gz

Check our tarball is here:

[mysql@deploy mariadbBinaries]$ ls -l
total 442528
-rw-rw-r--. 1 mysql mysql 453146319 Sep 25 22:39 mariadb-10.2.9-linux-x86_64.tar.gz

Let’s detar the tarball:

[mysql@deploy mariadbBinaries]$ tar zxvf mariadb-10.2.9-linux-x86_64.tar.gz
(omitted output)

Create a symbolic link between /mariadbConf/my.cnf and /etc/my.cnf

[mysql@deploy ~]$ sudo ln -s /mariadbConf/my.cnf /etc/my.cnf
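At this point /mariadbConf/my.cnf is still empty. A minimal configuration matching the directories created above might look like this (a sketch; adjust to your needs):

[mysqld]
basedir = /mariadbBinaries/mariadb-10.2.9-linux-x86_64
datadir = /mariadbData/mysqld1
socket  = /mariadbData/mysqld1/mysqld1.sock
user    = mysql

[client]
socket = /mariadbData/mysqld1/mysqld1.sock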

The post How to install MariaDB 10.2.9 from binary tarball on CentOS 7? appeared first on Blog dbi services.

How to deploy an AWS infrastructure using CloudFormation?

Yann Neuhaus - Thu, 2018-07-26 03:13

What is CloudFormation?

As you know, one of the big challenges of cloud computing is to deploy infrastructure as fast as possible and in the easiest way.
Of course, this challenge is largely met, but when it comes to deploying a large infrastructure, we need another kind of tool, called an orchestration tool, which may or may not be native to the cloud provider. Below is an overview of some cloud providers and their native orchestration tools:
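  • AWS: CloudFormation
  • Microsoft Azure: Azure Resource Manager (ARM) templates
  • Google Cloud Platform: Cloud Deployment Manager
  • OpenStack: Heat

As a minimal illustration of the idea, a CloudFormation template is just a JSON or YAML document describing the resources to deploy; for example, this sketch creates a single S3 bucket:

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example that creates one S3 bucket
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket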


The post How to deploy an AWS infrastructure using CloudFormation? appeared first on Blog dbi services.

list reports for a given dates range

Tom Kyte - Wed, 2018-07-25 22:06
Hello there!! I am new to PL/SQL and I have been given a task to develop a procedure by joining two tables; the program should give a report of invoices within a given range of dates. I want my program to show reports as per my specificati...
Categories: DBA Blogs

how to return result set from stored procedure.

Tom Kyte - Wed, 2018-07-25 22:06
We are developing an intranet application using ASP, IIS and Oracle 8i. I want to return a result set from an Oracle stored procedure to an ASP page. How do I do it? Like in Sybase you say create procedure test (parameters) resultset (col1, col2). Lik...
Categories: DBA Blogs

PLSQL Performance Tuning for conditional logic

Tom Kyte - Wed, 2018-07-25 22:06
I have written a validation parser which gets called using a webservice (meaning it needs high-performance code), and I have added the example link in Live SQL where I have added conditional clauses (for null-checking several input parameters and ...
Categories: DBA Blogs

Oracle 18c database is released for Linux (on premise)

Dietrich Schroff - Wed, 2018-07-25 14:12
After my posting nearly a week ago about the published 18c documentation, on Monday the binaries for the Oracle 18c database were released.

Here is the link to the download page: oracle.com

In one of the next postings I will try an rpm installation...

Oracle TimesTen Scaleout Sets New Performance Standard for In-Memory Databases

Oracle Press Releases - Wed, 2018-07-25 07:00
Press Release
Oracle TimesTen Scaleout Sets New Performance Standard for In-Memory Databases World’s fastest transaction processing database delivers unmatched speed and scalability for extreme OLTP applications, such as real-time trading, real-time telecommunications billing, IoT, and real-time fraud detection

Redwood Shores, Calif.—Jul 25, 2018

Empowering global organizations to manage growing transaction workloads, Oracle today announced Oracle TimesTen Scaleout, the world’s fastest Online Transaction Processing (OLTP) database. Oracle TimesTen Scaleout, built on a proven enterprise architecture, is capable of massive scaleout, extreme transaction processing throughput, and ultra-low response times.

TimesTen Scaleout was designed specifically to address real-time business workloads that handle an immense volume of data transactions, such as real-time trading, real-time telecommunications billing, IoT, and real-time fraud detection. Delivering breakthrough high throughput read and write speeds, TimesTen Scaleout achieves an impressive 144 million SQL transactions/second and 1.2 billion SQL statements/second running the Telecom Provider Throughput Benchmark (TPTBM) on commodity hardware.

"We are proud to announce the release of TimesTen Scaleout, a new scaleout in-memory database for OLTP workloads," said Andrew Mendelsohn, executive vice president, Oracle Database.  “Since it is based on the mature and time-tested TimesTen In-Memory Database, TimesTen Scaleout has both extensive sophisticated functionality, as well as incredible performance. This scaleout architecture is designed for extreme performance OLTP workloads and further extends Oracle's in-memory database technology leadership."

Built on Oracle’s unmatched experience delivering industrial-strength database offerings, Oracle TimesTen Scaleout offers always-on operations and instant scalability. It includes a highly available architecture with multiple active copies of data that are automatically kept in sync. Queries and transactions can be initiated from, and executed on, any replica, ensuring database users always have access to data. With instant scalability, organizations can expand or shrink database instances and resources as business demands change.

“Our marketing service system was successfully deployed under the new TimesTen Scaleout architecture with almost no application code changes,” said Tang Tang, head of the construction and maintenance department at Chongqing Mobile (China Mobile). “The entire system has not only improved performance by more than three times, but also successfully supported a number of new high concurrency business modules. This fully demonstrates that Oracle TimesTen Scaleout is an excellent distributed relational in-memory database product for OLTP SQL-based applications.”

“Processing business transactions with large workloads with speed and agility is critical for making real-time business decisions, particularly those related to connected IoT devices, fraud detection and billing,” said Julian Dontcheff, Global Oracle Technology Lead for Accenture. “Having teamed with Oracle for more than two decades to deliver the latest technology innovations to our joint clients to help them accelerate digital transformation, we look forward to leveraging TimesTen Scaleout as an important component of our data architecture toolkits.”

Oracle TimesTen Scaleout supports parallel SQL execution, global secondary indexes, and full-featured multi-statement distributed transactions. It features Oracle Database compatible data types, languages, and APIs, including SQL and PLSQL, the Oracle Call Interface (OCI), JDBC, among others. The system also allows for multiple data distribution options to accommodate a wide variety of application requirements and provides centralized management capabilities that can be accessed via Oracle SQL Developer.

TimesTen Scaleout is just the latest Oracle database innovation. To learn more about recent database solutions, including the Oracle Autonomous Data Warehouse, please visit here.

Oracle Data Management

Oracle provides the industry's most complete family of data management software products: Oracle RDBMS, Oracle TimesTen Scaleout In-Memory Database, Oracle NoSQL Database, MySQL, and Berkeley DB.  Oracle's new Autonomous Database Cloud provides the industry's first self-driving, self-securing, and self-repairing database cloud service. It's built on unique Oracle data management technologies, including Oracle Exadata Database Machine, Oracle Database 18c, Oracle Real Application Clusters, and Oracle Multitenant, plus algorithms using artificial intelligence to make it autonomous.

Contact Info
Dan Muñoz
Oracle
+1.650.506.2904
dan.munoz@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Dan Muñoz

  • +1.650.506.2904

Nicole Maloney

  • +1.650.506.0806

Oracle Retail Breaks Down Inventory Barriers with Cloud Service

Oracle Press Releases - Wed, 2018-07-25 07:00
Press Release
Oracle Retail Breaks Down Inventory Barriers with Cloud Service Inventory Cloud Service Enables Retailers to Connect Customers with Items, Sizes, and Colors, No Matter Where the Item Resides

Redwood Shores, Calif.—Jul 25, 2018

To empower retailers to more easily track, access and manage inventory in store, Oracle Retail has introduced a new Store Inventory Operations Cloud Service. By providing the ability to view inventory by size, color and other key attributes, the Oracle Retail Cloud Service helps retailers improve customer satisfaction and fulfill demand regardless of channel.

“With Oracle Retail Store Inventory Operations Cloud Service, we are providing retailers with the ability to have real-time inventory visibility to enable core merchandising operations, ecommerce and order management,” said Jeff Warren, vice president, Oracle Retail. “The integration points provide the level of inventory visibility and fulfillment transparency that makes a tremendous difference to the customer and to the business.”

For fashion retailers, customer satisfaction often comes down to simply being able to locate and deliver an item in the size and color that the customer wants. Traditionally, store associates endeavor to fulfill customer requests for an elusive size, or color, by looking online or calling fellow stores. This process can be time-consuming and ineffective, consuming resources across a business that already operates on narrow margins.

Oracle Retail Store Inventory Operations Cloud Service solves this problem and provides the retail community with:

  • “Store Up” Vs “Warehouse Down” Approach to Inventory Management: By providing the ability to have visibility into inventory at a detailed level within the store, combined with a view of warehouse inventory, the Oracle Retail Store Inventory Operations Cloud Service enables stores to influence inventory sell through by requesting it directly from the warehouse.

  • Item Visibility by Size, Color, and Location. The Oracle Retail Store Inventory Operations Cloud Service is designed to address fashion or hardline retailers’ unique need for item visibility, and caters to them by enabling identifying items at the color and size level as well as by the item’s exact location within stores across the store network.

  • Access to Item Inventory Positions Across All Locations, to Meet Fulfillment for All Channels. With visibility to where an item resides, comes the ability to drive increased customer satisfaction, complete customer transactions and fulfill orders quickly and efficiently. While retailers often are asked to find and ship an item to customers, the task can consume an inordinate amount of time and resources with a lack of connectivity and transparency. By streamlining inventory management at the store, the Oracle Retail solution helps retailers to improve customer service and drive increased profitability.

  • Mobile, Intuitive User Interface Reduces Training for Store Associates. Oracle Retail Store Inventory Operations Cloud Service is designed to drive increased efficiencies across the store inventory processes. The intuitive user interface can reduce training time, and increases productivity and return on employee investment at a faster rate than ever before.

  • Faster Rollout, Lower Total Cost of Ownership. Oracle Retail Store Inventory Operations Cloud Service provides retailers the ability to deploy the store inventory solution faster, while taking advantage of future investment through frequent service updates.  The Store Inventory Operations Cloud Service plays a key role in Oracle Retail’s overall commitment to ensure a timely, common view of inventory across the retail network—an imperative for successful omnichannel fulfillment.

Contact Info
Matt Torres
Oracle
415-595-1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 415-595-1584
