Feed aggregator

When does PostgreSQL create the table and index files on disk?

Yann Neuhaus - Sun, 2018-08-05 07:26

A question that pops up from time to time is: when we create a table or an index in PostgreSQL, are the files on disk created immediately, or does that only happen when the first row is inserted? The question mostly comes from Oracle DBAs, because Oracle has deferred segment creation. In PostgreSQL there is no parameter for that, so let's do a quick test.

We start with a simple table:

postgres=# create table t1 ( a int );

To get the real file name we can either use the pg_relation_filepath function:

postgres=# select pg_relation_filepath('t1');
 pg_relation_filepath
----------------------
 base/33845/33933
(1 row)

… or we can use the oid2name utility:

postgres@pgbox:/home/postgres/ [PG10] oid2name -d postgres -t t1
From database "postgres":
  Filenode  Table Name
     33933          t1

Now we can easily check whether that file already exists:

postgres@pgbox:/home/postgres/ [PG10] ls -la $PGDATA/base/33845/33933
-rw-------. 1 postgres postgres 0 Jul 24 07:47 /u02/pgdata/10/PG103/base/33845/33933

It is already there but empty. The files for the visibility map and the free space map are not yet created:

postgres@pgbox:/home/postgres/ [PG10] ls -la $PGDATA/base/33845/33933*
-rw-------. 1 postgres postgres 0 Jul 24 07:47 /u02/pgdata/10/PG103/base/33845/33933
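The paths involved follow a simple layout: the main file lives under $PGDATA/base/&lt;database OID&gt;/&lt;relfilenode&gt;, and the additional forks just get a suffix. A quick sketch with the values from this example (the OIDs and the data directory are from this test system and will differ on yours):

```shell
# Path layout: $PGDATA/base/<database oid>/<relfilenode>
# Values taken from the listings above (assumption: same data directory).
PGDATA=/u02/pgdata/10/PG103
db_oid=33845          # OID of the "postgres" database
relfilenode=33933     # filenode reported by oid2name / pg_relation_filepath
main_fork="$PGDATA/base/$db_oid/$relfilenode"
echo "$main_fork"     # the fsm and vm forks would be ${main_fork}_fsm / ${main_fork}_vm
```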

What happens when we create an index on that empty table?

postgres=# create index i1 on t1 (a);
postgres=# select pg_relation_filepath('i1');
 pg_relation_filepath
----------------------
 base/33845/33937
(1 row)
postgres=# \! ls -la $PGDATA/base/33845/33937
-rw-------. 1 postgres postgres 8192 Jul 24 08:06 /u02/pgdata/10/PG103/base/33845/33937

The file is created immediately as well but it is not empty. It is exactly one page (my blocksize is 8k). Using the pageinspect extension we can confirm that this page is just for metadata information:

postgres=# create extension pageinspect;
postgres=# SELECT * FROM bt_metap('i1');
 magic  | version | root | level | fastroot | fastlevel 
 340322 |       2 |    0 |     0 |        0 |         0
(1 row)
postgres=# SELECT * FROM bt_page_stats('i1', 0);
ERROR:  block 0 is a meta page

The remaining question is: when will the free space map and the visibility map be created? After or with the first insert?

postgres=# insert into t1 (a) values (1);
postgres=# \! ls -la $PGDATA/base/33845/33933*
-rw-------. 1 postgres postgres 8192 Jul 24 08:19 /u02/pgdata/10/PG103/base/33845/33933

Definitely not. The answer is: vacuum:

postgres=# vacuum t1;
postgres=# \! ls -la $PGDATA/base/33845/33933*
-rw-------. 1 postgres postgres  8192 Jul 24 08:19 /u02/pgdata/10/PG103/base/33845/33933
-rw-------. 1 postgres postgres 24576 Jul 24 08:22 /u02/pgdata/10/PG103/base/33845/33933_fsm
-rw-------. 1 postgres postgres  8192 Jul 24 08:22 /u02/pgdata/10/PG103/base/33845/33933_vm

Hope that helps …


The article When does PostgreSQL create the table and index files on disk? appeared first on Blog dbi services.

Documentum – Silent Install – Docbroker & Licences

Yann Neuhaus - Sat, 2018-08-04 14:20

In a previous blog, I quickly went through the different things to know about the silent installations as well as how to install the CS binaries. Once the CS binaries are installed, you can install/configure a few more components. In this second blog, I will continue with:

  • Documentum docbroker/connection broker installation
  • Configuration of a Documentum licence


1. Documentum docbroker/connection broker installation

As mentioned in the previous blog, the examples provided by Documentum contain almost all possible parameters, but for this section only very few of them are required. The properties file for a docbroker/connection broker installation is as follows:

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbroker.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbroker.properties
### Silent installation response file for a Docbroker

### Action to be executed

### Docbroker parameters

### Common parameters

[dmadmin@content_server_01 ~]$


A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.CONFIGURATOR.LICENSING: Whether or not you want to configure a licence using this properties file. Here since we just want a docbroker/connection broker, it is obviously false
  • SERVER.CONFIGURATOR.REPOSITORY: Whether or not you want to configure a docbase/repository. Same here, it will be false
  • SERVER.CONFIGURATOR.BROKER: Whether or not you want to configure a docbroker/connection broker. That’s the purpose of this properties file so it will be true
  • SERVER.DOCBROKER_ACTION: The action to be executed; it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment in silent mode even if the source version doesn’t support the silent installation/upgrade, as long as the target version (CS 7.3, CS 16.4, …) does
  • SERVER.DOCBROKER_PORT: The port the docbroker/connection broker will listen to (always the native port)
  • SERVER.DOCBROKER_NAME: The name of the docbroker/connection broker to create/upgrade/delete
  • SERVER.PROJECTED_DOCBROKER_HOST: The hostname to use for the dfc.properties projection for this docbroker/connection broker
  • SERVER.PROJECTED_DOCBROKER_PORT: The port to use for the dfc.properties projection related to this docbroker/connection broker. It should obviously be the same as “SERVER.DOCBROKER_PORT”, don’t ask me why there are two different parameters for that…
  • SERVER.DOCBROKER_CONNECT_MODE: The connection mode to use for the docbroker/connection broker, it can be either native, dual or secure. If it is dual or secure, you have 2 choices:
    • Use the default “Anonymous” mode, which is actually not really secure
    • Use a real “SSL Certificate” mode, which requires some more parameters to be configured (and you need to have the keystore and truststore already available):
      • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbroker/connection broker
      • SERVER.DOCBROKER_KEYSTORE_FILE_NAME: The name of the p12 file that contains the keystore
      • SERVER.DOCBROKER_KEYSTORE_PASSWORD_FILE_NAME: The name of the password file that contains the password of the keystore
      • SERVER.DOCBROKER_CIPHER_LIST: Colon separated list of ciphers to be enabled (E.g.: EDH-RSA-AES256-GCM-SHA384:EDH-RSA-AES256-SHA)
      • SERVER.DFC_SSL_TRUSTSTORE: Full path and name of the truststore to be used that contains the SSL Certificate needed to trust the targets
      • SERVER.DFC_SSL_TRUSTSTORE_PASSWORD: The password of the truststore in clear text
      • SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE: Whether or not to use the Java truststore or the 2 above parameters instead
  • START_METHOD_SERVER: Whether or not you want the JMS to be restarted once the docbroker/connection broker has been created. Since we usually create the docbroker/connection broker just before creating the docbases/repositories, and since the docbases/repositories will stop the JMS anyway, we can leave it stopped here
  • MORE_DOCBASE: Never change this value, it should remain as false as far as I know
  • SERVER.CONGINUE.MORECOMPONENT: Whether or not you want to configure some additional components. Same as above, I would always leave it as false… I know that the name of this parameter is strange, but that’s the name coming from the templates… If you look around a bit on the internet, you might also find “SERVER.CONTINUE.MORE.COMPONENT”… Which one is “correct” and which one isn’t is still a mystery to me. I’m using the first one, but since I always set it to false, it has no impact for me and I never saw any errors in the log files because of it.
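Based on the parameter descriptions above, a docbroker properties file could look like the following sketch. The port, name and hostname are placeholder values for illustration, and the native connection mode is used so none of the SSL parameters are needed:

```properties
### Silent installation response file for a Docbroker

### Action to be executed
SERVER.DOCBROKER_ACTION=CREATE

### Docbroker parameters
SERVER.DOCBROKER_PORT=1489
SERVER.DOCBROKER_NAME=Docbroker
SERVER.PROJECTED_DOCBROKER_HOST=content_server_01
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.DOCBROKER_CONNECT_MODE=native

### Common parameters
INSTALLER_UI=silent
KEEP_TEMP_FILE=true
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=false
SERVER.CONFIGURATOR.BROKER=true
START_METHOD_SERVER=false
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false
```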


Once the properties file is ready, you can install the docbroker/connection broker using the following command:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbroker.properties


That’s it, after a few seconds, the prompt will be returned and the docbroker/connection broker will be installed with the provided parameters.


2. Configuration of a Documentum licence

Once you have a docbroker/connection broker installed, you can configure/enable a number of licences (actually, you could have done it before). For this example, I will only enable the TCS, but you can do the same for all the others. The properties file for a licence configuration is as follows:

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Licence.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Licence.properties
### Silent installation response file for a Licence

### Action to be executed

### Licensing parameters

### Common parameters

[dmadmin@content_server_01 ~]$


A short description of these properties – compared to the above ones:

  • SERVER.CONFIGURATOR.LICENSING & SERVER.CONFIGURATOR.BROKER: This time, we will obviously set the broker to false and the licensing to true so we do not re-install another docbroker/connection broker
  • Licences:
    • SERVER.TCS_LICENSE: Licence string to enable the Trusted Content Services on this CS
    • SERVER.XHIVE_LICENSE: Licence string to enable the XML Store Feature
    • SERVER.AS_LICENSE: Licence string to enable the Archive Service
    • SERVER.CSSL_LICENSE: Licence string to enable the Content Storage Service Licence
    • aso… Some of these licences require more parameters to be added (XHIVE: “XHIVE.PAGE.SIZE”, “SERVER.ENABLE_XHIVE”, “SERVER.XHIVE_HOST”, aso…)
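Putting this together, a licence properties file that enables only the TCS could look like the following sketch (the licence string is obviously a placeholder):

```properties
### Silent installation response file for a Licence

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=true
SERVER.CONFIGURATOR.REPOSITORY=false
SERVER.CONFIGURATOR.BROKER=false

### Licensing parameters
SERVER.TCS_LICENSE=<TCS_licence_string>

### Common parameters
INSTALLER_UI=silent
KEEP_TEMP_FILE=true
```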


It might make sense to enable some licences only during the installation of a specific docbase/repository; that choice is up to you. In the above example, I enabled only the TCS, so it is available to all docbases/repositories that will be installed on this Content Server. Therefore, it makes sense to do it separately, before the installation of the docbases/repositories.

You now know how to install and configure a docbroker/connection broker as well as how to enable licences using the silent installation provided by Documentum.


The article Documentum – Silent Install – Docbroker & Licences appeared first on Blog dbi services.

Moving partitioned table and index to multiple tablespaces

Tom Kyte - Sat, 2018-08-04 02:06
I have a 10 billion row table, partitioned 32 ways. Each partition is located in a separate tablespace. Each of the 12 local partitioned indexes occupies its own tablespace as well (yes, total of 44 tablespaces). We are moving all application ta...
Categories: DBA Blogs

JSON_TABLE from array - not nested

Tom Kyte - Sat, 2018-08-04 02:06
The following json_table works perfectly well. The object "testing" holds an array, and data from the array is fetched as two rows when the path is "$.testing[*]": select j.* from json_table ( ' {"testing": [ { "message": "Th...
Categories: DBA Blogs

Date Intersection

Tom Kyte - Sat, 2018-08-04 02:06
Hi Team, Could you please have a look at the below scenario and help me with building the SQL please.. If there are intersecting date ranges, the row which has the longer date interval should get returned. i.e. One of the records for order 1 has date interval f...
Categories: DBA Blogs

ETL schemas on a production database

Tom Kyte - Sat, 2018-08-04 02:06
I need to support an ETL app that will use a production database clone as the data source. I'd like to put the ETL app's two schemas (target tables and pl/sql) on the production database, to limit ETL app set up that needs to be done on the clone aft...
Categories: DBA Blogs

Tips and Tricks for List of Values in Visual Builder Cloud Service

Shay Shmeltzer - Fri, 2018-08-03 17:39

While working on some customers' applications, I came across a few performance and functionality tips related to lists of values in Visual Builder Cloud Service (VBCS). While it is very tempting to use the built-in quick start that binds a list to results from a service, in some cases you might want to take a different approach.

One reason is performance - some lists don't change very often - and it makes more sense to fetch them only once instead of at every entry into a page. VBCS offers additional scopes for variables - and in the demo below I show how to use an application scope to only fetch a list of countries once (starting at 1:15). I also show how to define an array that will store the values of the list in a way that will allow you to access them from other locations in your app, and not just the specific UI component.

The other scenario that the demo shows relates to situations where you need to get additional information on the record you selected in the list. For example, your list might have the code and label but might contain additional meaningful fields. What if you need access to those values for the selected record?

In the demo below (6:40), I use a little JavaScript utility method that I add to the page to get the full details of the selected record from the list. The code used is (replace the two bold names with the id field and the value you want to return):

PageModule.prototype.FindThreeLetter = function (list, value) {
  return list.find(record => record.alpha2_code === value).alpha3_code;
};

In the past, any array used for an LOV had to have "label" and "code" fields, but Oracle JET now allows you to set other fields to act in those roles. This is shown at 5:54 using the options-keys property of the list component - a combobox in my case.

Check it out:

Categories: Development

Oracle 18c preinstall RPM on RedHat RHEL

Yann Neuhaus - Fri, 2018-08-03 17:03

The Linux prerequisites for Oracle Database are all documented but using the pre-install rpm makes all things easier. Before 18c, this was easy on Oracle Enterprise Linux (OEL) but not so easy on RedHat (RHEL) where the .rpm had many dependencies on OEL and UEK.
Now that 18c is there to download, there's also an 18c preinstall rpm, and the good news is that it can also be used on RHEL without modification.

This came to my attention on Twitter:

On the other hand, you may not have noticed that it no longer requires Oracle Linux specific RPMs. It can now be used on RHEL and all its derivatives.

— Avi Miller (@AviAtOracle) July 29, 2018

And of course this is fully documented:

In order to test it, I quickly created a CentOS instance on the Oracle Cloud:

I’ve downloaded the RPM from the OEL7 repository:

[root@instance-20180803-1152 opc]# curl -o oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 18244  100 18244    0     0  63849      0 --:--:-- --:--:-- --:--:-- 63790

then ran the installation:

[root@instance-20180803-1152 opc]# yum -y localinstall oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm

It automatically installs all dependencies:
oracle-database-preinstall-18c.x86_64 0:1.0-1.el7
Dependency Installed:
compat-libcap1.x86_64 0:1.10-7.el7 compat-libstdc++-33.x86_64 0:3.2.3-72.el7 glibc-devel.x86_64 0:2.17-222.el7 glibc-headers.x86_64 0:2.17-222.el7
gssproxy.x86_64 0:0.7.0-17.el7 kernel-headers.x86_64 0:3.10.0-862.9.1.el7 keyutils.x86_64 0:1.5.8-3.el7 ksh.x86_64 0:20120801-137.el7
libICE.x86_64 0:1.0.9-9.el7 libSM.x86_64 0:1.2.2-2.el7 libXext.x86_64 0:1.3.3-3.el7 libXi.x86_64 0:1.7.9-1.el7
libXinerama.x86_64 0:1.1.3-2.1.el7 libXmu.x86_64 0:1.1.2-2.el7 libXrandr.x86_64 0:1.5.1-2.el7 libXrender.x86_64 0:0.9.10-1.el7
libXt.x86_64 0:1.1.5-3.el7 libXtst.x86_64 0:1.2.3-1.el7 libXv.x86_64 0:1.0.11-1.el7 libXxf86dga.x86_64 0:1.1.4-2.1.el7
libXxf86misc.x86_64 0:1.0.3-7.1.el7 libXxf86vm.x86_64 0:1.1.4-1.el7 libaio-devel.x86_64 0:0.3.109-13.el7 libbasicobjects.x86_64 0:0.1.1-29.el7
libcollection.x86_64 0:0.7.0-29.el7 libdmx.x86_64 0:1.1.3-3.el7 libevent.x86_64 0:2.0.21-4.el7 libini_config.x86_64 0:1.3.1-29.el7
libnfsidmap.x86_64 0:0.25-19.el7 libpath_utils.x86_64 0:0.2.1-29.el7 libref_array.x86_64 0:0.1.5-29.el7 libstdc++-devel.x86_64 0:4.8.5-28.el7_5.1
libverto-libevent.x86_64 0:0.2.5-4.el7 nfs-utils.x86_64 1:1.3.0-0.54.el7 psmisc.x86_64 0:22.20-15.el7 xorg-x11-utils.x86_64 0:7.5-22.el7
xorg-x11-xauth.x86_64 1:1.0.9-1.el7

Note that the limits are stored in limits.d which has priority over limits.conf:

[root@instance-20180803-1152 opc]# cat /etc/security/limits.d/oracle-database-preinstall-18c.conf
# oracle-database-preinstall-18c setting for nofile soft limit is 1024
oracle soft nofile 1024
# oracle-database-preinstall-18c setting for nofile hard limit is 65536
oracle hard nofile 65536
# oracle-database-preinstall-18c setting for nproc soft limit is 16384
# refer orabug15971421 for more info.
oracle soft nproc 16384
# oracle-database-preinstall-18c setting for nproc hard limit is 16384
oracle hard nproc 16384
# oracle-database-preinstall-18c setting for stack soft limit is 10240KB
oracle soft stack 10240
# oracle-database-preinstall-18c setting for stack hard limit is 32768KB
oracle hard stack 32768
# oracle-database-preinstall-18c setting for memlock hard limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90 % of RAM
oracle hard memlock 134217728
# oracle-database-preinstall-18c setting for memlock soft limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90% of RAM
oracle soft memlock 134217728

Note that memlock is set to 128GB here but can be higher on machines with huge RAM (up to 90% of RAM)

And for information, here is what is set in /etc/sysctl.conf:

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
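As a side note, the kernel.shmall and kernel.shmmax values above are consistent with each other: shmall is expressed in pages (4 KB on x86_64) while shmmax is in bytes, and both amount to 4 TB. A quick sketch to verify the arithmetic:

```shell
# Cross-check of the sysctl values above: shmall (in pages) times the page
# size (in bytes) should equal shmmax (in bytes). 4 KB pages assumed (x86_64).
page_size=4096
shmall=1073741824
shmmax=$((shmall * page_size))
echo "$shmmax"   # 4398046511104 bytes = 4 TB, matching kernel.shmmax above
```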

Besides that, the preinstall rpm disables NUMA and transparent huge pages (as boot options in GRUB). It creates the oracle user (uid 54321, belonging to the groups oinstall, dba, oper, backupdba, dgdba, kmdba, racdba).


The article Oracle 18c preinstall RPM on RedHat RHEL appeared first on Blog dbi services.

Documentum – Silent Install – Things to know, binaries & JMS installation

Yann Neuhaus - Fri, 2018-08-03 13:55

Documentum introduced some time ago already the silent installations for its software. The way to use this changed a little bit but it seems they finally found their way. This blog will be the first of a series to present how to work with the silent installations on Documentum because it is true that it is not really well documented and most probably not much used at the moment.

We are using this where possible for our customers and it is true that it is really helpful to avoid human errors and install components more quickly. Be aware that this isn’t perfect! There are some parameters with typos, some parameters that are really not self-explanatory, so you will need some time to understand everything but, in the end, it is still helpful.

Using the silent installation is a first step but you will still need a lot of manual interventions to execute these as well as actually making your environment working. I mean it only replaces the GUI installers so everything you were doing around that is still needed (preparation of files/folders/environment, custom start/stop scripts, service setup, Java Method Server (JMS) configuration, Security Baselines, SSL setup, aso…). That’s why we also developed internally scripts or playbooks (Ansible for example) to perform everything around AND use the Documentum silent installations. In this blog and more generally in this series, I will only talk about the silent installations coming from Documentum.

Let’s start with the basis:

  1. Things you need to know
  2. Documentum Content Server installation (binaries & JMS)


1. Things you need to know
  • Each and every component installation needs its own properties file, which the installer uses to know what to install and how to do it; that is essentially all you need to provide.
  • As I mentioned above, there are some typos in a few parameters coming from the properties files like “CONGINUE” instead of “CONTINUE”. These aren’t errors in my blogs, the parameters are really like that. All the properties files I’m showing here have been tested and validated in a lot of environments, including PROD ones in High Availability.
  • To know more about the silent installation, you can check the installation documentation. There isn’t much to read about it but still some potentially interesting information.
  • The Documentum documentation does NOT contain any description of the parameters you can/should use; that’s why I will try, in each blog, to describe them as much as possible.
  • You can potentially do several things at once using a single silent properties file; the only restriction is that it needs to use the same installer. Therefore, you could install a docbroker/connection broker, a docbase/repository and configure/enable a licence using a single properties file, but you wouldn’t be able to do the silent installation of the binaries as well, because that needs another installer. That’s definitely not what I’m doing, because I find it messy; I really prefer to separate things, so I know I’m using only the parameters that I need for a specific component and nothing else.
  • There are examples provided when you install Documentum. You can look at the folder “$DM_HOME/install/silent/templates” and you will see some properties files. In these files, you will usually find most of the parameters that you can use, but from what I remember a few are missing. Be aware that some files are for Windows and some are for Linux; it’s not always the same because some parameters are specific to a certain OS:
    • linux_ files are for Linux obviously
    • win_ files are for Windows obviously
    • cfs_ files are for a CFS/Remote Content Server installation (to provide High Availability to your docbases/repositories)
  • If you look at the folder “$DM_HOME/install/silent/silenttool”, you will see that there is a utility to generate silent files based on your current installation. You need to provide a silent installation file for a Content Server and it will generate for you a CFS/Remote CS silent installation file with most of the parameters that you need. Do not rely 100% on this file — there might still be some parameters missing, but the ones present should be correct. I will write a blog on the CFS/Remote CS as well, to provide an example.
  • You can generate silent properties file by running the Documentum installers with the following command: “<installer_name>.<sh/bin> -r <path>/<file_name>.properties”. This will write the parameters you selected/enabled/configured into the <file_name>.properties file so you can re-use it later.
  • To install an additional JMS, you can use the jmsConfig.sh script or jmsStandaloneSetup.bin for an IJMS (Independent JMS – Documentum 16.4 only). It won’t be in the blogs because I’m only showing the default one created with the binaries.
  • The following components/features can be installed using the silent mode (it is possible that I’m missing some, these are the ones I know):
    • CS binaries + JMS
    • JMS/IJMS
    • Docbroker/connection broker
    • Licences
    • Docbase/repository (CS + CFS/RCS + DMS + RKM)
    • D2
    • Thumbnail


2. Documentum Content Server installation (binaries & JMS)

Before starting, you need to have the Documentum environment variables defined ($DOCUMENTUM, $DM_HOME, $DOCUMENTUM_SHARED), that doesn’t change. Once that is done, you need to extract the installer package (below I used the package for a CS 7.3 on Linux with an Oracle DB):

[dmadmin@content_server_01 ~]$ cd /tmp/dctm_install/
[dmadmin@content_server_01 dctm_install]$ tar -xvf Content_Server_7.3_linux64_oracle.tar
[dmadmin@content_server_01 dctm_install]$
[dmadmin@content_server_01 dctm_install]$ chmod 750 serverSetup.bin
[dmadmin@content_server_01 dctm_install]$ rm Content_Server_7.3_linux64_oracle.tar


Then prepare the properties file:

[dmadmin@content_server_01 dctm_install]$ vi CS_Installation.properties
[dmadmin@content_server_01 dctm_install]$ cat CS_Installation.properties
### Silent installation response file for CS binary

### Installation parameters

### Common parameters

[dmadmin@content_server_01 dctm_install]$


A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • APPSERVER.SERVER_HTTP_PORT: The port to be used by the JMS that will be installed
  • APPSERVER.SECURE.PASSWORD: The password of the “admin” account of the JMS. Yes, you need to put all passwords in clear text in the silent installation properties files so add it just before starting the installation and remove them right after
  • COMMON.DO_NOT_RUN_DM_ROOT_TASK: Whether or not you want to run the dm_root_task in the silent installation. I usually set it to true so it is NOT executed, because the Installation Owner I’m using does not have root access for security reasons
  • On Windows, you would need to provide the Installation Owner’s password as well and the path you want to install Documentum on ($DOCUMENTUM). On linux, the first one isn’t needed and the second one needs to be in the environment before starting.
  • You could also potentially add more properties in this file: SERVER.LOCKBOX_FILE_NAMEx and SERVER.LOCKBOX_PASSPHRASE.PASSWORDx (where x is a number starting with 1 and incrementing in case you have several lockboxes). These parameters would be used for existing lockbox files that you would want to load. Honestly, these parameters are useless. You will anyway need to provide the lockbox information during the docbase/repository creation and you will need to specify if you want a new lockbox, an existing lockbox or no lockbox at all so specifying it here is kind of useless…
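Based on the descriptions above, a properties file for the binaries installation could look like the following sketch (the JMS port and the password are placeholder values):

```properties
### Silent installation response file for CS binary

### Installation parameters
APPSERVER.SERVER_HTTP_PORT=9080
APPSERVER.SECURE.PASSWORD=<jms_admin_password>

### Common parameters
INSTALLER_UI=silent
KEEP_TEMP_FILE=true
COMMON.DO_NOT_RUN_DM_ROOT_TASK=true
```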


Once the properties file is ready, you can install the Documentum binaries and the JMS in silent using the following command:

[dmadmin@content_server_01 dctm_install]$ ./serverSetup.bin -f CS_Installation.properties


This concludes the first blog of this series about Documentum silent installations. Stay tuned for more soon.


The article Documentum – Silent Install – Things to know, binaries & JMS installation appeared first on Blog dbi services.

Authenticate proxy user from windows credentials

Tom Kyte - Fri, 2018-08-03 07:46
I am trying to work out how to connect using a proxy but passing a windows credential in - like this: SQL> CONN proxy_user[domain\windows_user]/proxy_pass So far it doesn't seem possible. Do you know how this can happen? Thanks
Categories: DBA Blogs

Upgrade Oracle Grid Infrastructure from 12.1 to 12.2

Yann Neuhaus - Fri, 2018-08-03 03:26

The following blog will provide the necessary steps to upgrade the Grid Infrastructure from 12.1 to 12.2 for a Standalone Server.
One of the new features of GI 12.2 is the usage of the AFD (Oracle ASM Filter Driver).

Assumptions :

 You have installed Oracle GI 12.1 as grid user
 You have installed Oracle Database 12.1 as oracle user
 You have configured the groups asmadmin,asmoper,asmdba
 You installed oracle-rdbms-server-12cr2-preinstall rpm
 You patched your Oracle GI with PSU July 2017 (combo patch 25901062 to patch the Oracle 12.1 stack, GI & RDBMS)
 [root]mkdir /u01/app/grid/product/12.2.0/grid/
 [root]chown -R grid:oinstall /u01/app/grid/product/12.2.0/grid/
 # stop all databases that are using ASM
 [oracle]srvctl stop database -d ORCL

Installation : Tasks

[grid]cd /u01/app/grid/product/12.2.0/grid/
[grid]unzip /stage/linuxx64_12201_grid_home.zip
	Choose Upgrade Oracle Grid Infrastructure option.
	Confirm that all Oracle DBs using ASM are stopped.
	Check :
        Oracle base : /u01/app/grid/  
        Software Location : /u01/app/grid/product/12.2.0/grid/
	Uncheck "Automatically run configuration scripts". Running the configuration scripts automatically is not recommended by Oracle;
if you do it anyway, it is quite possible that your upgrade process dies without any output.
	So, at the right moment, you will be asked to run rootupgrade.sh manually.
	Click Next and validate that all the pre-requirements are confirmed.
	Monitor the progress and run the rootupgrade.sh script when prompted.
	Once the upgrade completes successfully:
[grid@dbisrv04 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM
The Oracle base has been set to /u01/app/grid

[grid@dbisrv04 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is []

Migrating ASM disks from ASMlib to AFD : Tasks

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.


[grid@dbisrv04 ~]$ asmcmd dsget

[grid@dbisrv04 ~]$ asmcmd dsset '/dev/xvda*','ORCL:*','AFD:*'

[grid@dbisrv04 ~]$ asmcmd dsget
parameter:/dev/xvda*, ORCL:*, AFD:*


[root]export ORACLE_HOME=/u01/app/grid/product/12.2.0/grid/
[root]$ORACLE_HOME/bin/crsctl stop has -f


root@dbisrv04 ~]# $ORACLE_HOME/bin/asmcmd afd_configure

ASMCMD-9524: AFD configuration failed 'ERROR: ASMLib deconfiguration failed'
Cause: acfsload is running. To configure AFD, oracleasm and acfsload must be stopped
Solution: stop acfsload and rerun asmcmd afd_configure

[root@dbisrv04 ~]# oracleasm exit
[root@dbisrv04 ~]# $ORACLE_HOME/bin/acfsload stop

root@dbisrv04 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.


[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'dbisrv04.localdomain'


[root]$ORACLE_HOME/bin/crsctl stop has


[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_refresh
[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
Label                     Filtering   Path
DISK01                      ENABLED   /dev/sdf1
DISK02                      ENABLED   /dev/sdg1
DISK03                      ENABLED   /dev/sdh1
DISK04                      ENABLED   /dev/sdi1
DISK05                      ENABLED   /dev/sdj1
DISK06                      ENABLED   /dev/sdk1
DISK07                      ENABLED   /dev/sdl1
DISK08                      ENABLED   /dev/sdm1
DISK09                      ENABLED   /dev/sdn1


[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_dsset '/dev/sd*'


[root]$ORACLE_HOME/bin/crsctl stop has -f
[root]$ORACLE_HOME/bin/asmcmd afd_scan
[root]$ORACLE_HOME/bin/asmcmd afd_refresh


[root@dbisrv04 ~]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
Label                     Filtering   Path
DISK01                      ENABLED   /dev/sdf1
DISK02                      ENABLED   /dev/sdg1
DISK03                      ENABLED   /dev/sdh1
DISK04                      ENABLED   /dev/sdi1
DISK05                      ENABLED   /dev/sdj1
DISK06                      ENABLED   /dev/sdk1
DISK07                      ENABLED   /dev/sdl1
DISK08                      ENABLED   /dev/sdm1
DISK09                      ENABLED   /dev/sdn1


SQL> select name, label, path from v$asm_disk;

NAME       LABEL                PATH
---------- -------------------- --------------------
DISK04     DISK04               AFD:DISK04
DISK03     DISK03               AFD:DISK03
DISK02     DISK02               AFD:DISK02
DISK01     DISK01               AFD:DISK01
DISK07     DISK07               AFD:DISK07
DISK05     DISK05               AFD:DISK05
DISK06     DISK06               AFD:DISK06
DISK09     DISK09               AFD:DISK09
DISK08     DISK08               AFD:DISK08

Step 11: Confirm your AFD is loaded

[root@dbisrv04 ~]# /u01/app/grid/product/12.2.0/grid/bin/crsctl stat res -t
Name           Target  State        Server                   State details
Local Resources
               ONLINE  ONLINE       dbisrv04                 STABLE
               ONLINE  ONLINE       dbisrv04                 STABLE
               ONLINE  ONLINE       dbisrv04                 STABLE
               ONLINE  ONLINE       dbisrv04                 STABLE
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
               OFFLINE OFFLINE      dbisrv04                 STABLE
Cluster Resources
      1        ONLINE  ONLINE       dbisrv04                 STABLE
      1        OFFLINE OFFLINE                               STABLE
      1        ONLINE  ONLINE       dbisrv04                 STABLE
      1        ONLINE  ONLINE       dbisrv04                 STABLE
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u01/app/o


Step 11b: Introduce new disks with AFD

[root]. oraenv
[root@dbisrv04 ~]# asmcmd afd_label DISK10 /dev/sdo1 --init
ASMCMD-9521: AFD is already configured
[root@dbisrv04 ~]# asmcmd afd_label DISK10 /dev/sdo1
[root@dbisrv04 ~]# asmcmd afd_lslbl

Step 12: Erase Oracle ASMLib

[root] yum erase oracleasm-support.x86_64
[root] yum erase oracleasmlib.x86_64

The post Upgrade Oracle Grid Infrastructure from to appeared first on Blog dbi services.

Fishbowl Solutions Leverages Oracle WebCenter to Create Enterprise Employee Portal Solution for National Insurance Company

An insurance company that specializes in business insurance and risk management services for select industries was struggling to provide their 2,300 employees with an employee portal system that kept users engaged and informed. They desired to provide their employees with a much more modern employee portal that leveraged the latest web technologies while making it easier for business users to contribute content. With the ability for business stakeholders to own and manage content on the site, the company believed the new portal would be updated more frequently, which would make it stickier and keep users coming back.

Business Objective

The company had old intranet infrastructure that included 28 Oracle Site Studio sites. For the company’s various business units, contributing content to the site basically meant emailing Word documents to the IT department. IT would then check them into the old WebCenter Content system that supported Site Studio. Once the documents were converted to a web-viewable format, they would appear on the site. Since IT did not have a dedicated administrator for the portal, change requests typically took days and sometimes even weeks. With the company’s rapid growth, disseminating information to employees quickly and effectively became a priority. The employee portal was seen as the single place where employees could access company, department, and role-specific information – on their desktop or mobile device. The company needed a new portal solution backed by strong content management capabilities to make this possible. Furthermore, Oracle Site Studio was being sunsetted, so the company needed to move off an old, unsupported system and onto a modern portal platform with a development roadmap to support their business needs now and into the future. The company chose Oracle WebCenter Content and Portal 12c as this new system.

The company’s goals for the new employee portal were:

  • Expand what the business can do without IT involvement
  • Better engage and inform employees
  • Less manual, more dynamic portal content
  • Improve overall portal usability
  • Smart navigation – filter menus by department and role
  • Mobile access

Because of several differentiators and experience, the insurance company chose Fishbowl Solutions to help them meet their goals. The company really liked that Fishbowl offered a packaged solution that they felt would enable them to go to market faster with their new portal. Effectively, the company was looking for a portal framework that included the majority of what they needed – navigation, page templates, taskflows, etc. – that could be achieved with less coding and more configuration. This solution is called Portal Solution Accelerator.

Oracle WebCenter Paired with Fishbowl’s Portal Solution Accelerator

After working together to evaluate the problems, goals, strategy, and timeline, Fishbowl created a plan to help the company build its desired portal. Fishbowl provided software and services for rapid deployment, covering portal setup, user experience, and content integration. Fishbowl upgraded the company’s portal from Site Studio to Oracle WebCenter Portal and Content 12c. Fishbowl’s Portal Solution Accelerator includes portal and content bundles consisting of a collection of code, pages, assets, content, and templates. PSA also offers content integration, single-page application (SPA) task flows, and built-in personalization. These foundations reduced time-to-market and delivered speed, performance, and a developer-friendly design.


After the new portal and the various changes were implemented, content publishing time was reduced by 90 percent: changes and updates now take hours instead of days or weeks, which encourages users to publish content. The new framework allows new portals to be created with little work from IT. Additionally, the in-place editor makes it easy for business users to edit their content and see changes in real time. Role-based contribution and system-managed workflows streamline content governance. The new mega-menu provided by the SPA offers faster, more intuitive navigation to intranet content. This navigation is overlaid with Google Search integration, further ensuring that users can find the content they need. Most of the components used in the intranet are reusable and easy to modify for unique cases, so the company can stay up to date with minimal effort. Finally, the portal supports phone, tablet, and desktop access, making the intranet more accessible and ensuring repeat visits.

Overall, the national insurance company has seen an immense change in content publishing time reduction, ease of editing content, and managing and governing the portal since working with Fishbowl. The solutions that Fishbowl created and implemented helped decrease weekly internal support calls from twenty to one.

The post Fishbowl Solutions Leverages Oracle WebCenter to Create Enterprise Employee Portal Solution for National Insurance Company appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

The plan ignores my index

Tom Kyte - Thu, 2018-08-02 13:26
Good Afternoon, I have a table for the generation of a report by year and week but when I execute the query a TAF is marked. I tried to force the indexes but the execution plan ignores it. What can I do to take the index?, Is it necessary to cha...
Categories: DBA Blogs

How to grant v_$Session to a normal user, If we do not have access to sys user

Tom Kyte - Thu, 2018-08-02 13:26
How to grant v_$Session to a normal user, in a normal user we are using in a stored procedure. And we dont have access to sys user. By using select any dictionary privilege we can access but they do not want grant select any dictionary privilege to a...
Categories: DBA Blogs

PL/SQL query with NULL variables

Tom Kyte - Thu, 2018-08-02 13:26
What is the best way to handle a query with multiple variables, and some of the variables can be null, like: <code>FUNCTION GET_RECIPE(P_RECIPE_LIST IN VARCHAR2, P_OWNER_LIST IN VARCHAR2, ...
Categories: DBA Blogs

Global temporary table error

Tom Kyte - Thu, 2018-08-02 13:26
Hi AskTom, Can you please help me with this issue. Our application uses lot of global temporary table (GTT) has on commit preserve rows option. <code> CREATE GLOBAL TEMPORARY TABLE "ODR"."GTT_POINT" ( "POINT_ID" NUMBER(10,0) NOT NULL ENAB...
Categories: DBA Blogs

How to find all Mondays between two dates?

Tom Kyte - Thu, 2018-08-02 13:26
I have to find all mondays between two date range which can be parameterized or coming from two different columns of a table. Also need to generate a sql to get next 20 mondays from sysdate. can you please help me to get sql query for these 2 r...
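The teaser is cut off, but the underlying logic is plain calendar arithmetic (in Oracle itself this is typically done with next_day() plus a connect-by row generator). A sketch of that logic in Python, not taken from the eventual AskTom answer:

```python
from datetime import date, timedelta

def mondays_between(start: date, end: date):
    """All Mondays in the closed range [start, end]."""
    # date.weekday() returns 0 for Monday, so this advances
    # start to the first Monday on or after it.
    d = start + timedelta(days=(7 - start.weekday()) % 7)
    result = []
    while d <= end:
        result.append(d)
        d += timedelta(days=7)
    return result

def next_mondays(from_date: date, n: int):
    """The next n Mondays strictly after from_date."""
    # Any 7-day window starting the day after contains exactly one Monday.
    first = mondays_between(from_date + timedelta(days=1),
                            from_date + timedelta(days=7))[0]
    return [first + timedelta(days=7 * i) for i in range(n)]

print(mondays_between(date(2018, 8, 1), date(2018, 8, 31)))
# -> the Mondays of August 2018: the 6th, 13th, 20th and 27th
```

The same two building blocks (advance to the first Monday, then step in 7-day increments) translate directly into the SQL row-generator approach.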
Categories: DBA Blogs

Extended Histograms – 2

Jonathan Lewis - Thu, 2018-08-02 08:13

Following on from the previous posting which raised the idea of faking a frequency histogram for a column group (extended stats), this is just a brief demonstration of how you can do this. It’s really only a minor variation of something I’ve published before, but it shows how you can use a query to generate a set of values for the histogram and it pulls in a detail about how Oracle generates and stores column group values.

We’ll start with the same table as we had before – two columns which hold only the combinations (‘Y’, ‘N’) or (‘N’, ‘Y’) in a very skewed way, with a requirement to ensure that the optimizer provides an estimate of 1 if a user queries for (‘N’,’N’) … and I’m going to go the extra mile and create a histogram that does the same when the query is for the final possible combination of (‘Y’,’Y’).

Here’s the starting code that generates the data, and creates histograms on all the columns (I’ve run this against and so far):

rem     Script:         histogram_hack_2a.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2018
rem     Last tested 

create table t1
as
select 'Y' c2, 'N' c3 from all_objects where rownum <= 71482 -- > comment to deal with wordpress format issue.
union all
select 'N' c2, 'Y' c3 from all_objects where rownum <= 1994 -- > comment to deal with wordpress format issue.
;

variable v1 varchar2(128)

begin
        :v1 := dbms_stats.create_extended_stats(null,'t1','(c2,c3)');
end;
/

execute dbms_stats.gather_table_stats(null, 't1', method_opt=>'for all columns size 10');

In a variation from the previous version of the code I’ve used the “create_extended_stats()” function so that I can return the resulting virtual column name (also known as an “extension” name) into a variable that I can use later in an anonymous PL/SQL block.

Let’s now compare the values stored in the histogram for that column with the values generated by a function call that I first referenced a couple of years ago:

select
        endpoint_value
from    user_tab_histograms
where
        table_name  = 'T1'
and     column_name = :v1
;

select
        distinct c2, c3,
        mod(sys_op_combined_hash(c2,c3),9999999999) endpoint_value
from    t1
;

ENDPOINT_VALUE
--------------
    4794513072
    6030031083

2 rows selected.

C C ENDPOINT_VALUE
- - --------------
N Y     4794513072
Y N     6030031083

2 rows selected.

So we have a method of generating the values that Oracle should store in the histogram; now we need to generate 4 values and supply them to a call to dbms_stats.set_column_stats() in the right order with the frequencies we want to see:

declare

        l_distcnt number;
        l_density number;
        l_nullcnt number;
        l_avgclen number;

        l_srec  dbms_stats.statrec;
        n_array dbms_stats.numarray;

begin

        dbms_stats.get_column_stats (
                ownname =>null,
                tabname =>'t1',
                colname =>:v1,
                distcnt =>l_distcnt,
                density =>l_density,
                nullcnt =>l_nullcnt,
                avgclen =>l_avgclen,
                srec    =>l_srec
        );

        l_srec.novals := dbms_stats.numarray();
        l_srec.bkvals := dbms_stats.numarray();

        for r in (
                select
                        mod(sys_op_combined_hash(c2,c3),9999999999) hash_value, bucket_size
                from    (
                        select 'Y' c2, 'Y' c3, 1 bucket_size from dual
                        union all
                        select 'N' c2, 'N' c3, 1 from dual
                        union all
                        select 'Y' c2, 'N' c3, 71482 from dual
                        union all
                        select 'N' c2, 'Y' c3, 1994 from dual
                )
                order by hash_value
        ) loop
                l_srec.novals.extend;
                l_srec.novals(l_srec.novals.count) := r.hash_value;
                l_srec.bkvals.extend;
                l_srec.bkvals(l_srec.bkvals.count) := r.bucket_size;
        end loop;

        n_array := l_srec.novals;

        l_distcnt  := 4;
        l_srec.epc := 4;

--      For 11g rpcnts must not be mentioned
--      For 12c it must be set to null or you
--      will (probably) raise error:
--              ORA-06533: Subscript beyond count

        l_srec.rpcnts := null;

        dbms_stats.prepare_column_values(l_srec, n_array);

        dbms_stats.set_column_stats(
                ownname =>null,
                tabname =>'t1',
                colname =>:v1,
                distcnt =>l_distcnt,
                density =>l_density,
                nullcnt =>l_nullcnt,
                avgclen =>l_avgclen,
                srec    =>l_srec
        );

end;
/
The outline of the code is simply: get_column_stats, set up a couple of arrays and simple variables, prepare_column_values, set_column_stats. The special detail that I’ve included here is that I’ve used a “union all” query to generate an ordered list of hash values (with the desired frequencies), then grown the arrays one element at a time to copy them in place. (That’s not the only option at this point, and it’s probably not the most efficient option – but it’s good enough). In the past I’ve used this type of approach but used an analytic query against the table data to produce the equivalent of 12c Top-frequency histogram in much older versions of Oracle.
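In other words, the loop produces two parallel arrays, value hashes in ascending order and their bucket frequencies, and the dictionary ends up recording cumulative endpoint numbers. The bookkeeping can be mimicked outside the database; here is a quick sketch in Python (the hash values are the ones quoted in this article, since sys_op_combined_hash() cannot be recomputed outside Oracle):

```python
# (value_hash, frequency) for the four combinations; the two real
# combinations carry their true row counts, the two faked ones get 1.
buckets = [
    (6030031083, 71482),  # ('Y','N')
    (4794513072, 1994),   # ('N','Y')
    (167789251, 1),       # ('N','N') - faked
    (8288761534, 1),      # ('Y','Y') - faked
]

# Sort by hash value, as the "order by hash_value" in the cursor does.
buckets.sort()

# novals holds the ordered hashes; endpoint numbers are running totals
# of the frequencies (which is what user_tab_histograms reports).
novals = [h for h, _ in buckets]
endpoints, running = [], 0
for _, freq in buckets:
    running += freq
    endpoints.append(running)

print(list(zip(novals, endpoints)))
```

The running totals come out as 1, 1995, 73477, 73478, which is exactly the endpoint_number pattern a frequency histogram stores.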

A couple of important points – I’ve set the “end point count” (l_srec.epc) to match the size of the arrays, and I’ve also changed the number of distinct values to match. For 12c to tell the code that this is a frequency histogram (and not a hybrid) I’ve had to null out the “repeat counts” array (l_srec.rpcnts). If you run this on 11g the reference to rpcnts is illegal so has to be commented out.

After running this procedure, here’s what I get in user_tab_histograms for the column:

select
        endpoint_value                          column_value,
        endpoint_number                         endpoint_number,
        endpoint_number - nvl(prev_endpoint,0)  frequency
from    (
        select
                endpoint_value,
                endpoint_number,
                lag(endpoint_number,1) over(
                        order by endpoint_number
                )                               prev_endpoint
        from    user_tab_histograms
        where
                table_name  = 'T1'
        and     column_name = :v1
        )
order by endpoint_number
;

COLUMN_VALUE ENDPOINT_NUMBER  FREQUENCY
------------ --------------- ----------
   167789251               1          1
  4794513072            1995       1994
  6030031083           73477      71482
  8288761534           73478          1

4 rows selected.

It’s left as an exercise to the reader to check that the estimated cardinality for the predicate “c2 = ‘N’ and c3 = ‘N'” is 1 with this histogram in place.
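For anyone who wants to see why the answer comes out at 1: with a frequency histogram in place, the optimizer's estimate for an equality predicate on a value that appears in the histogram is essentially that value's recorded bucket frequency. A toy lookup in Python, reusing the frequencies the article fakes into the histogram:

```python
# endpoint_value -> bucket frequency, as faked into the histogram.
frequency = {
    167789251: 1,       # ('N','N')
    4794513072: 1994,   # ('N','Y')
    6030031083: 71482,  # ('Y','N')
    8288761534: 1,      # ('Y','Y')
}

def estimate(value_hash):
    # A value present in a frequency histogram is estimated at its
    # recorded frequency; Oracle never estimates below 1 row.
    return max(frequency.get(value_hash, 0), 1)

# 167789251 is the hash reported for ('N','N'), so the estimate is 1.
print(estimate(167789251))
```

This is only a sketch of the principle; the real arithmetic also scales by num_rows/sample factors, which cancel out here because the bucket counts were set to actual row counts.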

Hitachi Content Intelligence deployment

Yann Neuhaus - Thu, 2018-08-02 07:19

Hitachi Content Intelligence (HCI) is a search and data processing solution. It allows the extraction, classification, enrichment, and categorization of data, regardless of where the data lives or what format it’s in.

Content Intelligence provides tools at large scale across multiple repositories. These tools are useful for identifying, blending, normalizing, querying, and indexing data for search, discovery, and reporting purposes.


HCI has components called data connections that it uses to access the places where your data is stored (these places are called data sources). A data connection contains all the authentication and access information that HCI needs to read the files in the data source.

HCI is extensible with published application programming interfaces (APIs) that support customized data connections, transformation stages, or building new applications.


HCI is composed of many services running on Docker.

[centos@hci ~]$ sudo docker ps -a
[sudo] password for centos:
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS               NAMES
0547ec8761cd        com.hds.ensemble:   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           admin-app
1f22db4aec4b        com.hds.ensemble:   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           sentinel-service
fa54650ec03a        com.hds.ensemble:   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           haproxy-service
6b82daf15093        com.hds.ensemble:   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           marathon-service
a12431829a56        com.hds.ensemble:   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           mesos_master_service
812eda23e759        com.hds.ensemble:   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           mesos_agent_service
f444ab8e66ee        com.hds.ensemble:   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           zookeeper-service
c7422cdf3213        com.hds.ensemble:   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           watchdog-service

Below a representation of all services of HCI platform.


System Requirements

HCI has been qualified using these Linux distributions:

  • Fedora 24
  • CentOS 7.2
  • Ubuntu 16.04 LTS
Docker requirements

HCI requires Docker to be installed on every server (instance) that runs HCI, at a version greater than 1.3.0.

Network requirements

Each HCI instance must have a static IP address, and multiple ports must be open for HCI components such as Zookeeper, Mesos, Cassandra, Kafka, etc.

For the list of ports, refer to the official HCI documentation. For our testing environment, we will simply stop the firewall service.
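Outside a lab, rather than stopping the firewall you would open the required ports and then verify them. A small Python sketch of such a reachability check (the service/port pairs below are illustrative examples, not the authoritative HCI list; consult the official documentation):

```python
import socket
from contextlib import closing

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising.
        return s.connect_ex((host, port)) == 0

# Illustrative pairs only; the real list for Zookeeper, Mesos,
# Cassandra, Kafka, etc. is in the HCI documentation.
examples = {"zookeeper": 2181, "cassandra": 9042, "kafka": 9092}
for name, port in examples.items():
    state = "open" if port_open("localhost", port) else "closed"
    print(f"{name:10} {port:5} {state}")
```

Running this on each instance before installation catches firewall misconfigurations early instead of during cluster deployment.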

System configuration & Installation

HCI can run on physical or virtual servers, or be hosted on public or private clouds. It is instantiated as a set of containers and provided to users as a self-service facility with support for detailed queries and ad hoc natural-language searches. HCI can run as a single instance or in cluster mode. For this blog, we will use a single instance.

Docker version:
[centos@hci ~]$ docker --version
Docker version 1.13.1, build 87f2fab/1.13.

If Docker is not installed, please follow the installation methods from the Docker official website.

Disable SELinux:
  • Backup current SELinux configuration:
[centos@hci ~]$ sudo cp /etc/selinux/config /etc/selinux/config.bak
  • Disable SELinux:
[centos@hci ~]$ sudo sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
 Create user and group:
[centos@hci ~]$ sudo groupadd hci -g 10001

[centos@hci ~]$ sudo useradd hci -u 10001 -g 10001
Disable firewall service:
[centos@hci ~]$ sudo service firewalld stop

Redirecting to /bin/systemctl stop firewalld.service
 Run Docker service:
[centos@hci ~]$ sudo systemctl status docker

* docker.service - Docker Application Container Engine

Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)

Active: active (running) since Thu 2018-08-02 10:08:38 CEST; 1s ago
Configure the Docker service to start automatically at boot:
[centos@hci ~]$ sudo systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service
Change the vm.max_map_count value:

Add 'vm.max_map_count = 262144' to /etc/sysctl.conf

[centos@hci ~]$ sudo vi /etc/sysctl.conf
[centos@hci ~]$ sudo sysctl -w vm.max_map_count=262144
Download HCI

Create first your Hitachi Vantara account, https://sso.hitachivantara.com/en_us/user-registration.html.

Then, from the Hitachi Vantara Community website https://community.hitachivantara.com/hci, clicks “Downloads”. You will have access to a 90-day trial license with the full feature set.

HCI Installation

Create a directory called hci in the location of your choice. We recommend using the largest disk partition:

[centos@hci ~]$ mkdir hci

Move the installation package to your hci directory:

[centos@hci ~]$ mv HCI- hci/

Extract the installation package:

[centos@hci hci]$ sudo tar -xzf HCI-

Run the installation script in the version-specific directory:

[centos@hci hci]$ sudo

Run the hci_setup script:

[centos@hci50 bin]$ sudo ./setup -i <ip-address-instance>

Run the hci_run script, and make sure that whatever method you use keeps the script running and restarts it automatically after a server reboot:

We recommend running the script as a service using systemd:

In the installation package, a service file is provided and you can edit this file according to your configuration:

  1. Edit the HCI.service file:
[centos@hci bin]$ vi HCI.service
  2. Ensure the ExecStart parameter is properly set, with the right path:

If not, change it to your hci installation path.

  3. Copy the HCI.service file to the appropriate location for your OS:
[centos@hci bin]$ sudo cp /hci/bin/HCI.service /etc/systemd/system
  4. Enable and start the HCI service:
[centos@hci bin]$ sudo systemctl enable HCI.service

Created symlink from /etc/systemd/system/multi-user.target.wants/HCI.service to /etc/systemd/system/HCI.service.

[centos@hci bin]$ sudo systemctl start HCI.service

Check if the service has properly started:

[centos@dhcp-10-32-0-50 bin]$ sudo systemctl status HCI.service

* HCI.service - HCI

   Loaded: loaded (/etc/systemd/system/HCI.service; enabled; vendor preset: disabled)

   Active: active (running) since Thu 2018-08-02 11:15:09 CEST; 45s ago

 Main PID: 5849 (run)

    Tasks: 6

   Memory: 5.3M

   CGroup: /system.slice/HCI.service

           |-5849 /bin/bash /home/centos/hci/bin/run

           |-8578 /bin/bash /home/centos/hci/bin/run

           `-8580 /usr/bin/docker-current start -a watchdog-service

HCI Deployment

With your favorite web browser, connect to HCI administrative App:


On the Welcome page, set a password for the admin user:


Choose what you would like to deploy:


Click on Hitachi Content Search button and click on Continue button.

Click on Deploy Single Instance button:


Wait for the HCI deployment until it finishes.

The post Hitachi Content Intelligence deployment appeared first on Blog dbi services.

Partner Webcast – Building event driven microservices with Oracle Event Hub CS

If we are about to pick one word which would characterize the microservice approach, it would probably be the word freedom. It is all about freedom to change, freedom to deploy at any time, finally...

We share our skills to maximize your revenue!
Categories: DBA Blogs

