Feed aggregator

Recover dropped tables with Virtual Access Restore in #Exasol

The Oracle Instructor - Fri, 2019-02-01 04:34

The technique to recover only certain objects from an ordinary backup is called Virtual Access Restore. It means you create a database from the backup that contains only the minimum elements needed to access the objects you request. This database is removed afterwards.

Let’s see an example. This is my initial setup:

EXAoperation Database page

One database in a 2+1 cluster. Yes it’s tiny because it lives on my notebook in VirtualBox. See here how you can get that too.

It uses the data volume v0000 and I took a backup into the archive volume v0002 already.

EXAoperation volumes

I have a schema named RETAIL there with the table SALES:

RETAIL.SALES

By mistake, that table gets dropped:

drop table

And I’m on AUTOCOMMIT, otherwise this could be rolled back in Exasol. Virtual Access Restore to the rescue!

First I need another data volume:

second data volume

Notice the size of the new volume: it is smaller than both the overall size of the backup and the size of the “production database”! I did that to prove that space is not much of a concern here.

Then I add a second database to the cluster that uses that volume. The connection port (8564) must be different from the port used by the first database, and the total DB RAM must not exceed the licensed size, which is limited to 4 GB RAM in my case:

second database

I did not start that database because for the restore procedure it has to be down anyway. Clicking on the DB Name and then on the Backups button gets me here:

Foreign database backups

No backup shown yet because I didn’t take any backups with exa_db2. Clicking on Show foreign database backups:

Backup choice

The Expiration date must be empty for a Virtual Access Restore, so I just remove it and click Apply. Then I select the Restore Type as Virtual Access and click Restore:

Virtual Access Restore

This will automatically start the second database:

Two databases in one cluster

I connect to exa_db2 with EXAplus, where the Schema Browser gives me the DDL for the table SALES:

ExaPlus Schema Browser get DDL

I take that DDL to exa_db1 and run it there, which gives me the table back, but empty. Next, I create a connection from exa_db1 to exa_db2 and import the table:

create connection exa_db2 
to '192.168.43.11..13:8564' 
user 'sys' identified by 'exasol';

import into retail.sales 
from exa at exa_db2 
table retail.sales;

This took about 2 minutes:

Import
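
Once the data is back, the connection object created for the import is no longer needed on exa_db1 (a small cleanup sketch, assuming you want to remove it):

drop connection exa_db2;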

The second database and then the second data volume can now be dropped. Problem solved!

 

Categories: DBA Blogs

Continuing the Journey

Steven Chan - Fri, 2019-02-01 03:50

Greetings, EBS Technology Blog readers!

Speaking from my own experience as an Oracle E-Business Suite customer, this blog served as my go-to place for information regarding Oracle E-Business Suite Technology. So I understand first-hand the importance of this blog to you.

We in the EBS Technology Product Management and Development Teams are grateful to Steven for his leadership in the continuing refinement and usability of Oracle E-Business Suite, and his pioneering use of this blog to better keep in touch with you, our customers. Personally, I could not have asked for a better mentor. Wishing you all the best, Steven! And, hoping our paths cross again soon and often.

On behalf of the whole team, let me stress our continued commitment to the blog, and intention of operating with Steven's original guiding principle in mind: to bring you the information you need, when you need it. And ultimately, to keep improving how we do this.

Steven's new path has left us with some rather large shoes to fill, so going forward, this blog will bring you the distinctive individual voices of a number of highly experienced experts from our team who will be giving you their own unique insights into what we have delivered. Over the next few weeks, Kevin and I will be re-introducing you to some of our existing and frequent contributors and introducing you to new blog authors.

Our key goal is to continue to provide you, our readers and customers, the very latest news direct from Oracle E-Business Suite Development. And last but by no means least, we look forward to hearing your comments and feedback as we continue the journey of this blog.

Related Articles
Categories: APPS Blogs

Installing Spinnaker on Pivotal Container Service (PKS) with NSX-T running on vSphere

Pas Apicella - Thu, 2019-01-31 19:47
I decided to install Spinnaker into one of the clusters of my vSphere PKS installation. Here is how I did this, step by step.

1. You will need PKS installed, which I have on vSphere with PKS 1.2 using NSX-T. Here is a screenshot of that showing the Ops Manager UI


Make sure your PKS Plans have these check boxes enabled; without them checked, Spinnaker will not install using the Helm chart we will be using below


2. In my setup I created a datastore which will be used by my K8s cluster. This is optional; you can set up PVCs however you see fit.



3. It is assumed you have a K8s cluster, which I have as shown below. I used the PKS CLI to create a small cluster of 1 master node and 3 worker nodes

$ pks cluster lemons

Name:                     lemons
Plan Name:                small
UUID:                     19318553-472d-4bb5-9783-425ce5626149
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   lemons.haas-65.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  10.y.y.y
Network Profile Name:

4. Create a StorageClass as follows; notice how we reference our vSphere datastore named "k8s" as per step 2

$ kubectl create -f storage-class-vsphere.yaml

Note: storage-class-vsphere.yaml defined as follows

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: k8s
  diskformat: thin
  fstype: ext3

5. Set this Storage Class as the default

$ kubectl patch storageclass fast -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Verify

papicella@papicella:~$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   14h

6. Install Helm as shown below

$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller
$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
$ sleep 10
$ helm ls

Note: rbac-config.yaml defined as follows

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

7. Install Spinnaker into your K8s cluster as follows

$ helm install --name myspinnaker stable/spinnaker --timeout 6000 --debug

If everything worked, the pods will look like this:

papicella@papicella:~$ kubectl get pods
NAME                                  READY     STATUS      RESTARTS   AGE
myspinnaker-install-using-hal-gbd96   0/1       Completed   0          14m
myspinnaker-minio-5d4c999f8b-ttm7f    1/1       Running     0          14m
myspinnaker-redis-master-0            1/1       Running     0          14m
myspinnaker-spinnaker-halyard-0       1/1       Running     0          14m
spin-clouddriver-7b8cd6f964-ksksl     1/1       Running     0          12m
spin-deck-749c84fd77-j2t4h            1/1       Running     0          12m
spin-echo-5b9fd6f9fd-k62kd            1/1       Running     0          12m
spin-front50-6bfffdbbf8-v4cr4         1/1       Running     1          12m
spin-gate-6c4959fc85-lj52h            1/1       Running     0          12m
spin-igor-5f6756d8d7-zrbkw            1/1       Running     0          12m
spin-orca-5dcb7d79f7-v7cds            1/1       Running     0          12m
spin-rosco-7cb8bd4849-c44wg           1/1       Running     0          12m

8. Once the Helm command completes, you will see output as follows

1. You will need to create 2 port forwarding tunnels in order to access the Spinnaker UI:
  export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace default $DECK_POD 9000

2. Visit the Spinnaker UI by opening your browser to: http://127.0.0.1:9000

To customize your Spinnaker installation. Create a shell in your Halyard pod:

  kubectl exec --namespace default -it myspinnaker-spinnaker-halyard-0 bash

For more info on using Halyard to customize your installation, visit:
  https://www.spinnaker.io/reference/halyard/

For more info on the Kubernetes integration for Spinnaker, visit:
  https://www.spinnaker.io/reference/providers/kubernetes-v2/

9. Go ahead and run these commands to connect to the Spinnaker UI via your localhost

$ export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
$ kubectl port-forward --namespace default $DECK_POD 9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000

10. Browse to http://127.0.0.1:9000



More Information

Spinnaker
https://www.spinnaker.io/

Pivotal Container Service
https://pivotal.io/platform/pivotal-container-service


Categories: Fusion Middleware

Configuration Management for Oracle Utilities

Anthony Shorten - Thu, 2019-01-31 18:45

An updated series of whitepapers is now available for managing configuration and code in Oracle Utilities products, whether the implementation is on-premise, hybrid or using the Oracle Utilities SaaS Cloud. It has been updated for the latest Oracle Utilities Application Framework release. The series highlights the generic tools, techniques and practices available for use in Oracle Utilities products, and is split into a number of documents:

  • Concepts. Overview of the series and the concept of Configuration Management for Oracle Utilities products.
  • Environment Management. Establishing and managing environments for use on-premise, hybrid and on the Oracle Utilities SaaS Cloud. There are some practices and techniques discussed to reduce implementation costs.
  • Version Management. Understanding the inbuilt and third party integration for managing individual versions of individual extension assets. There is a discussion of managing code on the Oracle Utilities SaaS Cloud.
  • Release Management. Understanding the inbuilt release management capabilities for creating extension releases and accelerators.
  • Distribution. Installation advice for releasing extensions across the environments on-premise, hybrid and Oracle Utilities SaaS Cloud.
  • Change Management. A generic change management process to approve extension releases including assessment criteria.
  • Configuration Status. The information available for reporting state of extension assets.
  • Defect Management. A generic defect management process to handle defects in the product and extensions.
  • Implementing Fixes. A process and advice on implementing single fixes individually or in groups.
  • Implementing Upgrades. The common techniques and processes for implementing upgrades.
  • Preparing for the Cloud. Common techniques and assets that need to be migrated prior to moving to the Oracle Utilities SaaS Cloud.

For more information and for the whitepaper associated with these topics refer to the Configuration Management Series (Doc Id: 560401.1) available from My Oracle Support.

Introducing Fishbowl’s XML Feed Connector for Google Cloud Search

Last November, Google released Cloud Search with third-party connectivity. While not a direct replacement for the Google Search Appliance (GSA), Google Cloud Search is Google’s next-generation search platform and is an excellent option for many existing GSA customers whose appliances are nearing (or past) their expiration. Fishbowl is aiming to make the transition for GSA customers even easier with our new XML Feed Connector for Google Cloud Search.

One of the GSA’s features was the ability to index content via custom feeds using the GSA Feeds Protocol. Custom XML feed files containing content, URLs, and metadata could be pushed directly to the GSA through a feed client, and the GSA would parse and index the content in those files. The XML Feed Connector for Google Cloud Search brings this same functionality to Google’s next-generation search platform, allowing GSA customers to continue to use their existing XML feed files with Cloud Search.

Our number one priority with the XML Feed Connector was to ensure that users would be able to use the exact same XML feed files they used with the GSA, with no modifications to the files required. These XML feed files can provide content either by pointing to a URL to be crawled, or by directly providing the text, HTML, or compressed content in the XML itself. For URLs, the GSA’s built-in web crawler would retrieve the content; however, Google Cloud Search has no built-in crawling capabilities. But fear not, as our XML Feed Connector will handle URL content retrieval before sending the content to Cloud Search for indexing. It will also extract the title and metadata from any HTML page or PDF document retrieved via the provided URL, allowing the metadata to be used for relevancy, display, and filtering purposes. For content feeds using base-64 compressed content, the connector will also handle decompression and extraction of content for indexing.

In order to queue feeds for indexing, we’ve implemented the GSA’s feed client functionality, allowing feed files to be pushed to the Connector through a web port. The same scripts and web forms you used with the GSA will work here. You can configure the HTTP listener port and restrict the Connector to only accept files from certain IP addresses.

Another difference between the GSA and Google Cloud Search is how they handle metadata. The GSA would accept and index any metadata provided for an item, but Cloud Search requires you to specify and register a structured data schema that defines the metadata fields that will be accepted. There are tighter restrictions on names of metadata fields in Cloud Search, so we implemented the ability to map metadata names between those in your feed files and those uploaded to Cloud Search. For example, let’s say your XML feed file has a metadata field titled “document_title”. Cloud Search does not allow for underscores in metadata definitions, so you could register your schema with the metadata field “documenttitle”, then using the XML Feed Connector, map the XML field “document_title” to the Cloud Search field “documenttitle”.
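
To make the mapping concrete, here is a simplified, hypothetical feed record in the style of the GSA feed XML (the element names follow the general gsafeed structure; the datasource name, URL and metadata values are invented for illustration):

<?xml version="1.0" encoding="UTF-8"?>
<gsafeed>
  <header>
    <datasource>example_feed</datasource>
    <feedtype>incremental</feedtype>
  </header>
  <group>
    <record url="https://intranet.example.com/docs/page1.html" mimetype="text/html">
      <metadata>
        <!-- "document_title" would be mapped by the connector to the
             registered Cloud Search field "documenttitle" -->
        <meta name="document_title" content="Example Page Title"/>
      </metadata>
    </record>
  </group>
</gsafeed>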

Here is a full rundown of the supported features in the XML Feed Connector for Google Cloud Search:

  • Full, incremental, and metadata-and-url feed types
  • Grouping of records
  • Add and delete actions
  • Mapping of metadata
  • Feed content:
    • Text content
    • HTML content
    • Base 64 binary content
    • Base 64 compressed content
    • Retrieve content via URL
    • Extract HTML title and meta tags
    • Extract PDF title and metadata
  • Basic authentication to retrieve content from URLs
  • Configurable HTTP feed port
  • Configurable feed source IP restrictions

Of course, you don’t have to have used the GSA to benefit from the XML Feed Connector. As previously mentioned, Google Cloud Search does not have a built-in web crawler, and the XML Feed Connector can be given a feed file with URLs to retrieve content from and index. Feeds are especially helpful for indexing html content that cannot be traversed using a traditional web/spidering approach such as web applications, web-based content libraries, or single-page applications. If you’d like to learn more about Google Cloud Search or the XML Feed Connector, please contact us.

Fishbowl Solutions is a Google Cloud Partner and authorized Cloud Search reseller.

The post Introducing Fishbowl’s XML Feed Connector for Google Cloud Search appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

SQL Profile not used on slightly different query

Bobby Durrett's DBA Blog - Thu, 2019-01-31 15:09

Last week I was asked to help with a performance problem that looked a lot like a problem I fixed in July with a SQL Profile. The query whose plan I fixed back in July was modified by a minor application change over the weekend. A single column that was already in the select clause was added to another part of the select clause. As a result, the SQL_ID for the new query was different than the one for the July query. The SQL Profile from July associated SQL_ID 2w9nb7yvu91g0 with PLAN_HASH_VALUE 1178583502, but since the SQL_ID was now 43r1v8v6fc52q the SQL Profile was no longer used. At first, I thought I would have to redo the work I did in July to create a SQL Profile for the new query. Then I realized that the plan I used in July would work with the new SQL_ID so all I did was create a SQL Profile relating SQL_ID 43r1v8v6fc52q with PLAN_HASH_VALUE 1178583502 and the problem was solved. This is an 11.2.0.3 database running on the HP-UX Itanium platform. Here is a post from 2013 explaining how to create a SQL Profile: url. I thought it would be helpful to use this post to go over the steps that I went through with the July incident and how I originally generated the good plan. Then I wanted to make some comments about the various ways I come up with good plans for SQL Profiles by either generating a new better plan or by finding an older existing better one. Lastly, I wanted to talk about how a given good plan can be used for a variety of similar SQL statements.
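
The 2013 post linked above covers the mechanics in detail. As a rough sketch only (assuming Oracle Support's coe_xfr_sql_profile.sql script from the SQLT/coe bundle is available, which is one common way to tie a SQL_ID to a PLAN_HASH_VALUE, and not necessarily the exact method used here), the general pattern looks like this:

-- generate a profile script for a given SQL_ID and a known good plan
-- (the plan must be available in the cursor cache or AWR for that SQL_ID)
@coe_xfr_sql_profile.sql <sql_id> <plan_hash_value>

-- then run the generated script to create the SQL Profile
@coe_xfr_sql_profile_<sql_id>_<plan_hash_value>.sql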

The problem query that I worked on in July and many of the other SQL statements that I tune with SQL Profiles have bind variables in their where clauses. Usually the optimizer generates the plan for a query with bind variables once based on the values of the bind variables at that time. Then, unless the plan is flushed out of the shared pool, the query continues to run on the same plan even if it is horribly inefficient for other bind variable values. There is a feature that will cause the optimizer to run different plans based on the bind variable values in some cases but the SQL statements that I keep running into do not seem to use that feature. Since the query I worked on in July had bind variables I assumed that it was a typical case of a plan that worked well for one set of bind variables and that was terribly slow for another set. So, I had to find a set of bind variable values that made the query slow and figure out a better plan for those values. I used my bind2.sql script to extract the bind variable values for the problem query when I was working on the problem in July.

After extracting the bind variables, I used an AWR report to figure out which part of the plan contributed the most to the run time of the query so that I knew which bind variable value was causing the slowdown. Using an AWR report in this way only works if you do not have a bunch of slow SQL statements running at the same time. In this case the problem query 2w9nb7yvu91g0 was dominating the activity on the database with 62.19% of the total elapsed time. If there were a bunch of SQL Statements at the top of this list with similar percent of total values, it might be hard to use the AWR report to find information about this one query.

Since the activity for 2w9nb7yvu91g0 was 87.19% CPU I looked for the segments with the most logical reads. Logical reads are reads from memory, so they consume CPU and not disk I/O. In the graph below the segment for the S_ACCNT_POSTN table has 88.60% of the logical reads so most likely this segment caused the slowness of the query’s plan.

I looked at the plan for 2w9nb7yvu91g0 to see where the most heavily read table was used. This would probably be the source of the slow query performance. I found that it was doing a range scan of an index for the S_ACCNT_POSTN table that had the column POSITION_ID as its first column. This made me suspect that the plan was using the wrong index. If an index was used to retrieve many rows from the table that could take a long time. I did a count on all the rows in the table grouping by POSITION_ID and found that most rows had a specific value for that column. I replaced the actual POSITION_ID values with VALUE1, VALUE2, etc. below to hide the real values.
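
A sketch of the kind of query that produces this distribution (assuming direct access to the S_ACCNT_POSTN table identified above; schema prefix omitted):

select position_id, count(*) cnt
from   s_accnt_postn
group by position_id
order by cnt desc;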

POSITION_ID            CNT
--------------- ----------
VALUE1             2075039
VALUE2               17671
VALUE3                8965
VALUE4                5830
VALUE5                5502
VALUE6                5070
VALUE7                4907
VALUE8                4903

Next, I verified that the query had an equality condition that related a bind variable to the POSITION_ID column of the problem table. This made me suspect that the plan in the shared pool was generated with a bind variable value for POSITION_ID other than VALUE1. So, that plan would work well for whatever value was used to create it. POSITION_ID would be equal to that value for a small percentage of the rows in the table. But, running the query in SQL*Plus with POSITION_ID='VALUE1' caused the optimizer to choose a plan that made sense given that this condition was true for most of the rows in the table. The PLAN_HASH_VALUE for the new plan was 1178583502.

I tested 1178583502 against a variety of possible bind variable values by using an outline hint in SQL*Plus scripts to force that plan no matter which values I tested against. I extracted the outline hint by running the query with POSITION_ID='VALUE1' and using this dbms_xplan call:

select * from table(dbms_xplan.display_cursor(null,null,'OUTLINE'));

Then I just added the outline hint to a copy of the same SQL*Plus script and tried various combinations of bind variable values as constants in the where clause just as I had tried VALUE1 for POSITION_ID. I used the values that I had extracted using bind2.sql. After verifying that the new plan worked with a variety of possible bind variable values, I used a SQL Profile to force 2w9nb7yvu91g0 to use 1178583502 and the problem was resolved.

I have just described how I created the original July SQL Profile by running a version of the problem query replacing the bind variables with constants that I knew would cause the original plan to run for a long time. The optimizer chose a better plan for this set of constants than the one locked into the shared pool for the original query. I used the PLAN_HASH_VALUE for this plan to create a SQL Profile for the July query. This is like an approach that I documented in two earlier blog posts. In 2014 I talked about using a hint to get a faster plan in memory so I could use it in a SQL Profile. In 2017 I suggested using an outline hint in the same way. In both of those cases I ran the problem query with hints and verified that it was faster with the hints. Then I used a SQL Profile to force the better PLAN_HASH_VALUE onto the problem query. So, in all these cases the key is to generate a better plan in any way possible so that it is in memory and then create a SQL Profile based on it. A lot of times we have queries that have run on a better plan in the past and we just apply a SQL Profile that forces the better plan that is already in the system. My December, 2018 post documents this type of situation. But the 2014 and 2017 blog posts that I mentioned above and the July 2018 example that I just described all are similar in that we had to come up with a new plan that the query had never used and then force it onto the SQL statement using a SQL Profile.

The incidents in January and July and the cases where I added hints all lead me to wonder how different one SQL statement can be from another and still share the same plan. The problem last week showed that two queries with slightly different select clauses could still use the same plan. The other cases show that you can add hints or run the statement with bind variables replaced with constants. In the January case I did not have to go back through the analysis that I did in July because I could quickly force the existing plan from the July query onto the January one. The January problem also shows the limits of SQL Profiles. The slightest change to a SQL statement causes a SQL Profile to be ignored, even though the plan would still work for the new SQL statement. But in the January case the ability to use the same plan for slightly different queries made it easy to create a new SQL Profile.

Bobby

Categories: DBA Blogs

A New Chapter, Redux

Steven Chan - Thu, 2019-01-31 11:54

The new team is in place now, so it's time to bow out.

My 21 years at Oracle have been more-fulfilling than I would have ever imagined. All of you – my mentors, colleagues, staff, and faithful readers – have helped me grow professionally and personally. Some of you have done even more, by contributing to my life in deep and profound ways. You have my eternal gratitude.

Life is long and the world small. I hope that our paths will cross again.

Categories: APPS Blogs

Descending Problem

Jonathan Lewis - Thu, 2019-01-31 09:34

I’ve written in the past about oddities with descending indexes ( here, here, and here, for example) but I’ve just come across a case where I may have to introduce a descending index that really shouldn’t need to exist. As so often happens it’s at the boundary where two Oracle features collide. I have a table that handles data for a large number of customers, who record a reasonable number of transactions per year, and I have a query that displays the most recent transactions for a customer. Conveniently the table is partitioned by hash on the customer ID, and I have an index that starts with the customer_id and transaction_date columns. So here’s my query or, to be a little more accurate, the client’s query – simplified and camouflaged:


select  /*+ gather_plan_statistics */
        *
from    (
             select
                    v1.*,
                    rownum rn
             from   (
                             select   /*
                                         no_eliminate_oby
                                         index_rs_desc(t1 (customer_id, transaction_date))
                                      */
                                      t1.*
                             from     t1
                             where    customer_id = 50
                             and      transaction_date >= to_date('1900-01-01','yyyy-mm-dd')
                             order by transaction_date DESC
                ) v1
                where  rownum <= 10 -- > comment to avoid WordPress format issue
         )
where    rn >= 1
;

You’ll notice some hinting – the /*+ gather_plan_statistics */ will allow me to report the rowsource execution stats when I pull the plan from memory, and the hints in the inline view (which I’ve commented out in the above) will force a particular execution plan – walking through the index on (customer_id, transaction_date) in descending order.

If I create t1 as a simple (non-partitioned) heap table I get the following plan unhinted (I’ve had to edit a “less than or equal to” symbol to avoid a WordPress format issue):

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |       |      1 |        |    14 (100)|     10 |00:00:00.01 |      14 |
|*  1 |  VIEW                           |       |      1 |     10 |    14   (0)|     10 |00:00:00.01 |      14 |
|*  2 |   COUNT STOPKEY                 |       |      1 |        |            |     10 |00:00:00.01 |      14 |
|   3 |    VIEW                         |       |      1 |     10 |    14   (0)|     10 |00:00:00.01 |      14 |
|   4 |     TABLE ACCESS BY INDEX ROWID | T1    |      1 |    340 |    14   (0)|     10 |00:00:00.01 |      14 |
|*  5 |      INDEX RANGE SCAN DESCENDING| T1_I1 |      1 |     10 |     3   (0)|     10 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM .LE. 10)
   5 - access("CUSTOMER_ID"=50 AND "TRANSACTION_DATE" IS NOT NULL AND "TRANSACTION_DATE">=TO_DATE('
              1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


Notice the descending range scan of the index – just as I wanted it – the minimal number of buffer visits, and only 10 rows (and rowids) examined from the table. But what happens if I recreate t1 as a hash-partitioned table with local index – here’s the new plan, again without hinting the SQL:


----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                      | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                               |       |      1 |        |   207 (100)|     10 |00:00:00.01 |     138 |       |       |          |
|*  1 |  VIEW                                          |       |      1 |     10 |   207   (1)|     10 |00:00:00.01 |     138 |       |       |          |
|*  2 |   COUNT STOPKEY                                |       |      1 |        |            |     10 |00:00:00.01 |     138 |       |       |          |
|   3 |    VIEW                                        |       |      1 |    340 |   207   (1)|     10 |00:00:00.01 |     138 |       |       |          |
|*  4 |     SORT ORDER BY STOPKEY                      |       |      1 |    340 |   207   (1)|     10 |00:00:00.01 |     138 |  2048 |  2048 | 2048  (0)|
|   5 |      PARTITION HASH SINGLE                     |       |      1 |    340 |   206   (0)|    340 |00:00:00.01 |     138 |       |       |          |
|   6 |       TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |      1 |    340 |   206   (0)|    340 |00:00:00.01 |     138 |       |       |          |
|*  7 |        INDEX RANGE SCAN                        | T1_I1 |      1 |    340 |     4   (0)|    340 |00:00:00.01 |       3 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM .LE. 10)
   4 - filter(ROWNUM .LE. 10)
   7 - access("CUSTOMER_ID"=50 AND "TRANSACTION_DATE">=TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "TRANSACTION_DATE" IS NOT NULL)

Even though the optimizer has recognised that it will be visiting a single partition through a local index, it has not chosen a descending index range scan, though it has used the appropriate index; so it has fetched all the relevant rows from the table in the wrong order, then sorted them, discarding all but the top 10. We’ve done 138 buffer visits (which would turn into disk I/Os, and far more of them, in the production system).

Does this mean that the optimizer can’t use the descending index when the table is partitioned – or that somehow the costing has gone wrong? Here’s the plan with the hints in place to see what happens when we demand a descending range scan:


----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |      1 |        |   207 (100)|     10 |00:00:00.01 |       8 |
|*  1 |  VIEW                                 |       |      1 |     10 |   207   (1)|     10 |00:00:00.01 |       8 |
|*  2 |   COUNT STOPKEY                       |       |      1 |        |            |     10 |00:00:00.01 |       8 |
|   3 |    VIEW                               |       |      1 |    340 |   207   (1)|     10 |00:00:00.01 |       8 |
|   4 |     PARTITION HASH SINGLE             |       |      1 |    340 |   206   (0)|     10 |00:00:00.01 |       8 |
|   5 |      TABLE ACCESS BY LOCAL INDEX ROWID| T1    |      1 |    340 |   206   (0)|     10 |00:00:00.01 |       8 |
|*  6 |       INDEX RANGE SCAN DESCENDING     | T1_I1 |      1 |    340 |     4   (0)|     16 |00:00:00.01 |       3 |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM .LE. 10)
   6 - access("CUSTOMER_ID"=50 AND "TRANSACTION_DATE" IS NOT NULL AND "TRANSACTION_DATE">=TO_DATE('
              1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

The optimizer is happy to oblige with the descending range scan – we can see that we’ve visited only 8 buffers, and fetched only 10 rows from the table. The cost, however, hasn’t made any allowance for the limited range scan. Check back to the plan for the simple (non-partitioned) table and you’ll see that the optimizer did allow for the reduced range scan. So the problem here is a costing one – we have to hint the index range scan if we want Oracle to limit the work it does.

You might notice, by the way, that the number of rowids returned in the index range scan descending operation is 16 rather than 10 – a little variation that didn’t show up when the table wasn’t partitioned. I don’t know why this happened, but when I changed the requirement to 20 rows the range scan returned 31 rowids, when I changed it to 34 rows the range scan returned 46 rows, and a request for 47 rows returned 61 index rowids – you can see the pattern: the number of rowids returned by the index range scan seems to be 1 + 15*N.

Footnote:

If you want to avoid hinting the code (or adding an SQL patch) you need only re-create the index with the transaction_date column declared as descending (“desc”), at which point the optimizer automatically chooses the correct strategy and the run-time engine returns exactly 10 rowids and doesn’t need to do any sorting. But who wants to create a descending index when they don’t really need it !
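
For reference, a sketch of what that re-creation would look like against the test table from the script below (same index name and columns, with transaction_date declared descending):

drop index t1_i1;

create index t1_i1 on t1(customer_id, transaction_date desc)
local
nologging
;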

If you want to reproduce the experiments, here’s the script to create my test data.


rem
rem     Script:         pt_ind_desc_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:        
rem
rem     Last tested 
rem             18.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create table t1 (
        customer_id,
        transaction_date,
        small_vc,
        padding 
)
partition by hash(customer_id) partitions 4
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        mod(rownum,128)                         customer_id,
        (trunc(sysdate) - 1e6) + rownum         transaction_date,
        lpad(rownum,10,'0')                     v1,
        lpad('x',100,'x')                       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

create index t1_i1 on t1(customer_id, transaction_date) 
local 
nologging
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’ve run this test on 12.1.0.2, 12.2.0.1, and 18.3.0.0 – the behaviour is the same in all three versions.

Italian Oracle User Group Tech Days 2019

Yann Neuhaus - Wed, 2019-01-30 15:28

The Italian Oracle User Group (ITOUG) is an independent group of Oracle enthusiasts and experts who work together as volunteers to promote technical knowledge sharing in Italy.

Here are the ITOUG Board members:
ITOUG Board

This year ITOUG Tech Days take place in Milan on 30th January and in Rome on 1st February. Two different streams for each event:
– Database
– Analytics and Big Data
Today I participated in the event in Milan.
But before talking about that, ITOUG Tech Days started with the speakers’ dinner on Tuesday evening in Milan: aperitif, good Italian food and very nice people.
ITOUG Speakers Dinner

On Wednesday morning, we all met at Oracle Italia in Cinisello Balsamo (MI):
ITOUG Milan

After the welcome message by some ITOUG Board members:
ITOUG Welcome  Msg
sessions finally started. I attended the following ones in the Database stream:

- “Instrumentation 2.0: Performance is a feature” by Lasse Jenssen from Norway
Lasse
We have to understand what’s going on in a system: performance is a feature and we need instrumentation. Oracle End-to-End metrics, new tags in 12c, v$sql_monitor, dbms_monitor… And work in progress for instrumentation 3.0 with ElasticSearch, LogStash and Kibana.

- “Hash Join Memory Optimization” by one of the ITOUG Board member, Donatello Settembrino
Donatello
How Hash Join works and how to improve performance by reducing PGA consumption. Examples of partitioning (to exclude useless data), (full) Partition-Wise Joins (to use fewer resources) and parallelism. Differences between Right-Deep Join Trees and Left-Deep Join Trees, and the concept of Bushy Join Trees in 12cR2.

- “Design your databases using Oracle SQL Developer Data Modeler” by Heli Helskyaho from Finland
Heli
Oracle SQL Developer Data Modeler, with SQL Developer or in standalone mode, to design your database. It uses Subversion, integrated into the tool, for version control and management. It also has support for other databases, MySQL for example. And it’s free.

- “Bringing your Oracle Database alive with APEX” by Dimitri Gielis from Belgium
Dimitri
Two things to learn from this session:
1) Use Oracle Application Express to design and develop a web application.
2) Use Quick SQL to create database objects and build a data model
And all that in a very fast way.

- “Blockchain beyond the Hype” by one of the ITOUG Board member, Paolo Gaggia
Paolo
The evolution of blockchain from bitcoin to new enterprise-oriented implementations, and some interesting use cases.

Every session was very interesting: thanks to the great and amazing speakers (experts working on Oracle technologies, Oracle ACEs, Oracle ACE Directors…) for sharing.

Follow the Italian Oracle User Group on Twitter (IT_OUG) and see you at the next ITOUG event!

Cet article Italian Oracle User Group Tech Days 2019 est apparu en premier sur Blog dbi services.

Review: eBook: Saas for Dummies

Dietrich Schroff - Wed, 2019-01-30 14:38
As I wrote some days ago, Oracle provided this book as a free download:


Here my review:

The book starts with the chapter "recognizing modern business challenges and opportunities". The author comes up with things like "business transformation starts with digital transformation" or "businesses will need to rethink every element of their business". It is a typical opening for this topic.

The next chapter has the title "Empowering Business Managers with SaaS Solutions". The focus here lies on how to use the cloud for your sales process. One short quotation: "Robust sales cloud solutions are so much more than contact lists, schedules, and sources of prospects." Another focus is the advantage SaaS gives HR professionals in finding and retaining top talent. For further information the following link is provided: www.oracle.com/goto/mit-tech-review

Chapter 3, "Transforming Your Business with SaaS", starts at page 35 of 68. Some helpful links are given, like "www.oracle.com/cloud/saas-scalability.html" or "www.oracle.com/cloudtco", among others.

Chapter 4 ("Choosing Your Path to the Cloud") talks about building hybrid
environments and extending apps with social, mobile and process capabilites. I think without some additional links, this is much to short.

By far the best chapter is chapter 5: "Ten (or so) Important Questions to Ask Your SaaS Provider". I will not reveal these questions (and answers) on this blog - but this is really helpful. Nevertheless the answers are sometimes too short.

From page 57 to 61 a glossary is provided.

My conclusion on this book: it is a really good quickstart into SaaS but not a deep dive into the topic. Thanks to Oracle for providing it as a free download.

Slides From January 2019 AZORA Meeting

Bobby Durrett's DBA Blog - Wed, 2019-01-30 10:20

Here are slides from the January 2019 AZORA Meeting with Charles Kim and Viscosity:

Oracle Database 12.2 New Features

Oracle Database 18c New Features

Thank you to Charles and Viscosity for making our meeting a big success, and thanks for sharing these slides.

Bobby

Categories: DBA Blogs

Recover a corrupted datafile in your DataGuard environment 11G/12C.

Yann Neuhaus - Wed, 2019-01-30 07:30

In a Data Guard environment, a datafile needs to be recovered on the STANDBY site in two situations: when it is deleted or when it is corrupted.
Below, I will explain how to recover a corrupted datafile in order to repair the Standby database, without having to restore the entire database.

Initial situation :

DGMGRL> connect /
Connected to "PROD_SITE2"
Connected as SYSDG.
DGMGRL> show configuration;

Configuration - CONFIG1

  Protection Mode: MaxPerformance
  Members:
  PROD_SITE2 - Primary database
    PROD_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 15 seconds ago)

In this environment, we have a table called EMP with 100 rows, owned by the user TEST (default tablespace TEST).

SQL> set linesize 220;
SQL> select username,default_tablespace from dba_users where username='TEST';

USERNAME     DEFAULT_TABLESPACE
-------------------------------
TEST         TEST

SQL> select count(*) from test.emp;

  COUNT(*)
----------
       100

By mistake, the datafile on the Standby site gets corrupted.

SQL> alter database open read only;
alter database open read only
*
ORA-01578: ORACLE data block corrupted (file # 5, block # 3)
ORA-01110: data file 5: '/u02/oradata/PROD/test.dbf'

As it is corrupted, redo apply is stopped until it is repaired. So the new inserts into the EMP table will not be applied:

SQL> begin
  2  for i in 101..150 loop
  3  insert into test.emp values (i);
  4  end loop;
  5  END;
  6  /

PL/SQL procedure successfully completed.

SQL> COMMIT;

Commit complete.

SQL> select count(*) from test.emp;

  COUNT(*)
----------
       150

SQL> select name,db_unique_name,database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
PROD      PROD_SITE2                     PRIMARY

To repair it, we will use the PRIMARY site to back up the controlfile and the related datafile.

oracle@dbisrv03:/home/oracle/ [PROD] rman target /

connected to target database: PROD (DBID=410572245)

RMAN> backup current controlfile for standby format '/u02/backupctrl.ctl';


RMAN> backup datafile 5 format '/u02/testbkp.dbf';

Starting backup at 29-JAN-2019 10:59:37
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=276 device type=DISK

We will transfer the backup pieces to the STANDBY server using scp:

 scp backupctrl.ctl oracle@dbisrv04:/u02/
 scp testbkp.dbf oracle@dbisrv04:/u02/

Now, we will start the restore/recover on the STANDBY server:

SQL> startup nomount
ORACLE instance started.

Total System Global Area 1895825408 bytes
Fixed Size                  8622048 bytes
Variable Size             570425376 bytes
Database Buffers         1308622848 bytes
Redo Buffers                8155136 bytes
SQL> exit
oracle@dbisrv04:/u02/oradata/PROD/ [PROD] rman target /


Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PROD (not mounted)

RMAN> restore controlfile from '/u02/backupctrl.ctl'; 
.........
RMAN> alter database mount;


RMAN> catalog start with '/u02/testbkp.dbf';

searching for all files that match the pattern /u02/testbkp.dbf

List of Files Unknown to the Database
=====================================
File Name: /u02/testbkp.dbf

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u02/testbkp.dbf




RMAN> restore datafile 5;

Starting restore at 29-JAN-2019 11:06:31
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /u02/oradata/PROD/test.dbf
channel ORA_DISK_1: reading from backup piece /u02/testbkp.dbf
channel ORA_DISK_1: piece handle=/u02/testbkp.dbf tag=TAG20190129T105938
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 29-JAN-2019 11:06:33

RMAN> exit

Now, we will start to apply the logs again and try to resync the STANDBY database.
!!! Here you need to stop the recovery process if you do not have an Active Data Guard license.

SQL> recover managed standby database using current logfile disconnect from session;
Media recovery complete.
SQL> recover managed standby database cancel;
SQL> alter database open read only;

Database altered.

SQL> select count(*) from test.emp;

  COUNT(*)
----------
       150

Now we can see that the latest insert activity on the PRIMARY site is available on the STANDBY site.

In a 12c environment, with an existing pluggable database PDB1, things are easier thanks to the RESTORE/RECOVER FROM SERVICE feature:

# on the standby site
rman target /

RMAN> restore tablespace PDB1:USERS from service PROD_PRIMARY;
RMAN> recover tablespace PDB1:USERS;

Cet article Recover a corrupted datafile in your DataGuard environment 11G/12C. est apparu en premier sur Blog dbi services.

Expertise in Your Ear: Top 10 Groundbreakers Podcasts for 2018

OTN TechBlog - Wed, 2019-01-30 07:13

On March 27, 2009 the Arch2Arch Podcast released its first program. Over nearly ten years, more than 360 programs, and several name changes, what is now known as the Oracle Groundbreakers Podcast enters 2019 as the most downloaded Oracle podcast, a status it has maintained since early 2015. The credit for that accomplishment is shared by an incredible roster of panelists who have taken the time to lend their voices and share their insight and expertise, and by a faithful and growing audience that keeps listening year after year.

The list below reflects the top ten most-downloaded Groundbreakers Podcast programs for the past year. Big thanks and congratulations to the panelists for putting these programs on top! (Note that several people appeared in multiple programs.)

Also note that IT careers, like IT itself, are constantly evolving. The titles and jobs listed for the panelists are those they held at the time the particular podcast was recorded.

  1 Chatbots: First Steps and Lessons Learned - Part 1

Released 09/19/2017

Chabot development comes with a unique set of requirements and considerations that may prove challenging to those making their first excursion into this new breed of services. This podcast features a panel of developers who have been there, done that, and are willing to talk about it.

2 Blockchain: Beyond Bitcoin

Released December 20, 2017

Blockchain has emerged from under crypto-currency’s shadow to become a powerful trend in enterprise IT -- and something that should be on every developer's radar. This program assembles a panel of Blockchain experts to discuss the technology's impact, examine use cases, and offer suggestions for developers who want to learn more in order to take advantage of the opportunities blockchain represents.

  • Lonneke Dikmans (Oracle ACE Director, Groundbreaker Ambassador, Head of Center of Excellence, eProseed)
  • John King (Senior Principal Enablement Specialist, Oracle)
  • Robert van Mölken (Oracle ACE, Groundbreaker Ambassador, Senior Integration / Cloud Specialist, AMIS)
  • Arturo Viveros (Oracle ACE, Grounbreaker Ambassador, Principal Architect, Sysco AS)
3 DevOps in the Real World: Culture, Tools, Adoption

Released February 21, 2018

Is the heat behind DevOps driving adoption? Are organizations on the adoption path making headway in the cultural and technological changes necessary for DevOps success? A panel of DevOps experts discusses these and other issues in this wide-ranging conversation.

4 Jfokus Panel: Building a New World Out of Bits

Released January 17, 2018

In this freewheeling conversation a panel of JFokus 2018 speakers discusses the trends and technologies that have captured their interest, the work that consumes most of their time, and the issues that concern them as IT professionals.

  • Jesse Anderson (Data Engineer, Creative Engineer, Managing Director, Big Data Institute)
  • Benjamin Cabé (IoT Program Manager, Evangelist, Eclipse Foundation)
  • Kevlin Henney (Consultant, programmer, speaker, trainer, writer, owner, Curbralan)
  • Siren Hofvander (Chief Security Officer, Min Doktor)
  • Dan Bergh Johnsson (Agile aficionado, Domain Driven Design enthusiast, code quality craftsman, Omegapoint)
5 On Microservice Implementation and Design

Released October 17, 2018

Microservices are a hot topic. But that's exactly the wrong reason to dive into designing and implementing microservices. Before you do that, check out what this panel of experts has to say about what makes microservices a wise choice.

  • Sven Bernhardt (Oracle ACE; Solution Architect, OPITZ Consulting)
  • Lucas Jellema (Oracle ACE Director; Groundbreaker Ambassador; CTO, Consulting IT Architect, AMIS Services)
  • Chris Richardson (Java Champion, Founder, Eventuate, Inc.)
  • Luis Weir (Oracle ACE Director; Groundbreaker Ambassador; CTO, Oracle Practice, Capgemini)
6 Women in Technology: Motivation and Momentum

Released February 6, 2018

Community leaders share insight on what motivated them in their IT careers and how they lend their expertise and energy in driving momentum in the effort to draw more women into technology.

  • Natalie Delemar (Senior Consultant, Ernst and Young; President, ODTUG Board of Directors)
  • Heli Helskyaho (Oracle ACE Director; CEO, Miracle Finland; Ambassador, EMEA Oracle Usergroups Community)
  • Michelle Malcher (Oracle ACE Director; Security Architect, Extreme Scale Solutions)
  • Kellyn Pot'Vin-Gorman (Technical Intelligence Manager, Office of CTO, Delphix; President, Board Of Directors, Denver SQL Server User Group)
  • Laura Ramsey (Manager, Database Technology and Developer Communities, Oracle)
7 What's Hot? Tech Trends That Made a Real Difference in 2017

Released November 15, 2017

Forget the hype! Which technologies made a genuine difference in the work of software developers in 2017? We gathered five highly respected developers in a tiny hotel room in San Francisco, tossed in a couple of microphones, and let the conversation happen.

  • Lonneke Dikmans (Oracle ACE Director, Groundbreaker Ambassador, Chief Product Officer, eProseed)
  • Lucas Jellema (Oracle ACE Director, Groundbreaker Ambassador, Chief Technical Officer, AMIS Services)
  • Frank Munz (Oracle ACE Director, Software Architect, Cloud Evangelist, Munz & More)
  • Pratik Patel (Java Champion, Chief Technical Officer, Triplingo, President, Atlanta Java Users Group)
  • Chris Richardson (Java Champion, Founder, Chief Executive Officer, Eventuate Inc.)
8 Developer Evolution: What's Rockin’ Roles in IT?

Released August 15, 2018

Powerful forces are driving change in long-established IT roles. This podcast examines the trends and technologies behind this evolution, and looks at roles that may emerge in the future.

  • Rolando Carrasco (Oracle ACE, Groundbreaker Ambassador, Co-owner, Principal SOA Architect, S&P Solutions)
  • Martin Giffy D'Souza (Oracle ACE Director, Director of Innovation, Insum Solutions)
  • Mark Rittman (Oracle ACE Director, Chief Executive Officer, MJR Analytics)
  • Phil Wilkins (Oracle ACE, Senior Consultant, Capgemini)
9 Chatbots: First Steps and Lessons Learned - Part 2

Released October 18, 2017

This podcast continues the discussion of chatbot development with an entirely new panel of developers who also had the opportunity to work with that same Oracle Intelligent Bots beta release.

  • Mia Urman (Oracle ACE Director; Chief Executive Officer, AuraPlayer Limited)
  • Peter Crew (Director, SDS Group; Chief Technical Officer, MagiaCX Solutions)
  • Christoph Ruepprich (Oracle ACE; Infrastructure Senior Principal, Accenture Enkitec Group)
10 Beyond Chatbots: An AI Odyssey

Released April 18, 2018

This program looks beyond chatbots to explore artificial intelligence -- its current capabilities, staggering potential, and the challenges along the way.

 

Get Involved

Most Oracle Groundbreakers Podcast programs are the result of suggestions by community members. If there is a topic you'd like to have discussed on the program, post a comment here, or contact me (@ArchBeatDev / email). You can even serve as a panelist or a guest host/producer!

Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

EMEA A&C Partners Cloud Security Workshop

  EMEA A&C Partners Cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Announcement: “Oracle Indexing Internals and Best Practices Seminar” – Berlin 8-9 May 2019 !!

Richard Foote - Wed, 2019-01-30 05:00
I’m very excited to announce I’ll be running my highly acclaimed “Oracle Indexing Internals and Best Practices” seminar in conjunction with the German Oracle User Group (DOAG) in Berlin, Germany on 8-9 May 2019. The venue will be the DOAG Offices in Berlin (DOAG Office, 6th Floor, Tempelhofer Weg 64, 12347 Berlin). Because of venue […]
Categories: DBA Blogs

Oracle Cloud Security & Identity Cloud Service (IDCS): Step By Step Lab Activity Guide for Beginners

Online Apps DBA - Wed, 2019-01-30 02:57

You learn Cloud Security when you do the hands-on. So visit: https://k21academy.com/idcs05 & follow the guides that you must perform in order to learn and become an expert in Oracle Cloud Security & Identity Cloud Service. […]

The post Oracle Cloud Security & Identity Cloud Service (IDCS): Step By Step Lab Activity Guide for Beginners appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

PostgreSQL 12: Detach postmaster process from pg_ctl’s session at server startup

Yann Neuhaus - Wed, 2019-01-30 00:37

Recently a commit landed in the PostgreSQL development tree that made me aware of something I did not know so far: before PostgreSQL 12, when you start PostgreSQL with a script using pg_ctl and that script gets cancelled, the postmaster is killed as well. Sounds weird? Let’s do a little demo.

The little demo script is quite simple: Print the version of pg_ctl, startup PostgreSQL and then sleep for 10 seconds:

postgres@pgbox:/home/postgres/ [PG10] cat start.sh 
#!/bin/bash
/u01/app/postgres/product/10/db_3/bin/pg_ctl --version
/u01/app/postgres/product/10/db_3/bin/pg_ctl -D /u02/pgdata/10/PG103 start
sleep 10

When you execute that against PostgreSQL before version 12 and then CTRL-C the script while it is sleeping, the postmaster will be killed as well:

postgres@pgbox:/home/postgres/ [PG10] ./start.sh 
pg_ctl (PostgreSQL) 10.5
waiting for server to start....2019-01-25 13:00:07.258 CET - 1 - 6853 -  - @ LOG:  listening on IPv4 address "0.0.0.0", port 5432
2019-01-25 13:00:07.259 CET - 2 - 6853 -  - @ LOG:  listening on IPv6 address "::", port 5432
2019-01-25 13:00:07.263 CET - 3 - 6853 -  - @ LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-01-25 13:00:07.282 CET - 4 - 6853 -  - @ LOG:  redirecting log output to logging collector process
2019-01-25 13:00:07.282 CET - 5 - 6853 -  - @ HINT:  Future log output will appear in directory "pg_log".
 done
server started
^C
postgres@pgbox:/home/postgres/ [PG10] ps -ef | grep postgres
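
If you are stuck on a version before 12, one possible workaround is to detach the server from the script's session yourself, which is roughly what the new pg_ctl behaviour does internally. Below is a minimal sketch of such a wrapper, assuming the same binaries and data directory as above and that the setsid utility from util-linux is available; treat it as an illustration, not the official fix:

#!/bin/bash
# start.sh for PostgreSQL < 12: print the pg_ctl version, then start the
# postmaster in a new session so that CTRL-C against this script (SIGINT sent
# to the terminal's foreground process group) is not delivered to it.
/u01/app/postgres/product/10/db_3/bin/pg_ctl --version
setsid /u01/app/postgres/product/10/db_3/bin/pg_ctl -D /u02/pgdata/10/PG103 start
sleep 10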

Starting with PostgreSQL 12 this will not happen anymore:

postgres@pgbox:/home/postgres/ [PG10] cat start.sh 
#!/bin/bash
/u01/app/postgres/product/DEV/db_1/bin/pg_ctl --version
/u01/app/postgres/product/DEV/db_1/bin/pg_ctl -D /u02/pgdata/DEV start
sleep 10
postgres@pgbox:/home/postgres/ [PGDEV] ./start.sh 
pg_ctl (PostgreSQL) 12devel
waiting for server to start....2019-01-25 13:02:51.690 CET - 1 - 9408 -  - @ LOG:  listening on IPv6 address "::1", port 5433
2019-01-25 13:02:51.690 CET - 2 - 9408 -  - @ LOG:  listening on IPv4 address "127.0.0.1", port 5433
2019-01-25 13:02:51.700 CET - 3 - 9408 -  - @ LOG:  listening on Unix socket "/tmp/.s.PGSQL.5433"
2019-01-25 13:02:51.734 CET - 4 - 9408 -  - @ LOG:  redirecting log output to logging collector process
2019-01-25 13:02:51.734 CET - 5 - 9408 -  - @ HINT:  Future log output will appear in directory "pg_log".
 done
server started
^C
postgres@pgbox:/home/postgres/ [PGDEV] ps -ef | grep postgres
postgres  9408     1  0 13:02 ?        00:00:00 /u01/app/postgres/product/DEV/db_1/bin/postgres -D /u02/pgdata/DEV
postgres  9409  9408  0 13:02 ?        00:00:00 postgres: logger   
postgres  9411  9408  0 13:02 ?        00:00:00 postgres: checkpointer   
postgres  9412  9408  0 13:02 ?        00:00:00 postgres: background writer   
postgres  9413  9408  0 13:02 ?        00:00:00 postgres: walwriter   
postgres  9414  9408  0 13:02 ?        00:00:00 postgres: autovacuum launcher   
postgres  9415  9408  0 13:02 ?        00:00:00 postgres: stats collector   
postgres  9416  9408  0 13:02 ?        00:00:00 postgres: logical replication launcher   
postgres  9422 29009  0 13:03 pts/2    00:00:00 ps -ef
postgres  9423 29009  0 13:03 pts/2    00:00:00 grep --color=auto postgres
root     29005  2910  0 12:53 ?        00:00:00 sshd: postgres [priv]
postgres 29008 29005  0 12:53 ?        00:00:01 sshd: postgres@pts/2
postgres 29009 29008  0 12:53 pts/2    00:00:00 -bash
root     29463  2910  0 12:54 ?        00:00:00 sshd: postgres [priv]
postgres 29466 29463  0 12:54 ?        00:00:00 sshd: postgres@pts/1
postgres 29467 29466  0 12:54 pts/1    00:00:00 -bash

Learned something new again.

The article PostgreSQL 12: Detach postmaster process from pg_ctl’s session at server startup appeared first on Blog dbi services.

Announcement: “Oracle Indexing Internals and Best Practices Seminar” – Berlin 8-9 May 2019 !!

Richard Foote - Tue, 2019-01-29 23:04
I’m very excited to announce I’ll be running my highly acclaimed “Oracle Indexing Internals and Best Practices” seminar in conjunction with the German Oracle User Group (DOAG) in Berlin, Germany on 8-9 May 2019. The venue will be the DOAG Offices in Berlin (DOAG Office, 6th Floor, Tempelhofer Weg 64, 12347 Berlin). Because of venue […]
Categories: DBA Blogs

Automate your PDMWorks, EPDM, and File System to PTC Windchill Data Migrations Using Fishbowl’s LinkLoader

Companies don’t invest in Windchill so they can execute lengthy, complicated, expensive data migration projects. They invest in Windchill so they can realize the benefits of a well-executed PLM strategy. However, the lynchpin of a good Windchill deployment is getting accurate data into Windchill…and the faster it happens, the faster a company can move on their PLM strategy.

Fishbowl Solutions has worked with numerous customers on enterprise data migration projects over the years. When companies first deploy Windchill, their migration projects typically deal with CAD data and documents. Manually loading CAD data is far too time consuming. As mentioned, these activities are vital to success, so why waste valuable engineering resources that could be better utilized on engineering work?

Fishbowl Solutions offers the LinkLoader family of apps, which automate the loading of files and data into Windchill PDMLink. When it comes to CAD data and documents, the files might live on a network file system (NFS) or in a third-party PDM system such as EPDM or PDMWorks. In fact, migrating Solidworks data into Windchill PDMLink is the busiest type of migration project I have ever seen.

Please note that Fishbowl can do a lot more than just migrate Solidworks data from the file system, but hopefully this gives a little insight into some typical project types.

To read the rest of this blog post, please visit the PTC LiveWorx 2019 blog.

The post Automate your PDMWorks, EPDM, and File System to PTC Windchill Data Migrations Using Fishbowl’s LinkLoader appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

How To Deploy Office Web Apps Server 2013

Yann Neuhaus - Tue, 2019-01-29 10:30
The 4 Steps Of Office Web Apps Server 2013 Installation

Office Web Apps provides browser-based versions of Excel, OneNote, Word and PowerPoint. It also lets users view and edit Office files they access through SharePoint 2013.

The objective of this topic is to describe the steps to install Office Web Apps 2013, then create the farm and the binding so that it can be used within a SharePoint 2013 test environment.

For this example, we have the following systems in place:

  • Windows Server 2012 r2
  • SharePoint Server 2013
1) Install Server roles, features & Role services

Server roles:

  • Web server

Features:

  • Ink and Handwriting services

Role services:

  • Dynamic Content Compression
  • Windows Authentication
  • .Net Extensibility 4.5
  • ASP.Net 4.5
  • Server Side Includes

Restart the server.

Note that if your installation is done on Windows Server 2016, the feature “Ink and Handwriting services” is now a default part of the server and no longer requires a separate package.

2) Install Office Web Apps

Launch the setup from the DVD file and wait until the installation is finished.

3) Create Office Web Apps Farm

1) Specify the internal URL for the server name
2) Open a PowerShell console with administrative privileges
3) Run the PowerShell command “New-OfficeWebAppsFarm -InternalURL http://servername -AllowHttp -EditingEnabled”

This command allows HTTP because the communication is internal, and the -EditingEnabled switch enables editing so that users can edit documents.

To verify that the farm was successfully created, open the URL “http://servername/hosting/discovery” in a browser.
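
If you prefer to script this step, the farm creation and the quick check can be combined in one elevated PowerShell session on the Office Web Apps server. This is only a sketch based on the command above; “servername” is a placeholder that you need to replace with your own host name:

# Run on the Office Web Apps server in an elevated PowerShell console.
Import-Module OfficeWebApps
New-OfficeWebAppsFarm -InternalURL "http://servername" -AllowHttp -EditingEnabled

# The farm is healthy if the WOPI discovery XML is returned.
Invoke-WebRequest -Uri "http://servername/hosting/discovery" -UseBasicParsing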

4) Bind Office Web Apps and SharePoint

The communication between the two servers still needs to be done over the HTTP protocol.

1) Use administrative privileges
2) Switch to the SharePoint Management Shell
3) Run the command “New-SPWOPIBinding -ServerName servername -AllowHTTP”

The command should confirm that the HTTP protocol is used internally and return a list of bindings.

Check SharePoint default internal zone:

Get-SPWOPIZone

If it is HTTPS, change it into HTTP:

Set-SPWOPIZone -Zone internal-http

Set the authentication OAuth over HTTP to true:

  • $config = (Get-SPSecurityTokenServiceConfig)
  • $config.AllowOAuthOverHttp = $true
  • $config.update()

SharePoint can now use Office Web Apps.
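
For reference, the binding steps above can also be run as one short script from the SharePoint 2013 Management Shell. This is just a sketch of the commands already listed, assuming a hypothetical Office Web Apps host called “wacserver”; adjust the names to your environment:

# Run in the SharePoint 2013 Management Shell as administrator.
# Bind SharePoint to the Office Web Apps farm over HTTP (test environments only).
New-SPWOPIBinding -ServerName "wacserver" -AllowHTTP

# Make sure the internal HTTP zone is used.
if ((Get-SPWOPIZone) -ne "internal-http") {
    Set-SPWOPIZone -Zone "internal-http"
}

# Allow OAuth over HTTP, which is required when no SSL is configured.
$config = Get-SPSecurityTokenServiceConfig
$config.AllowOAuthOverHttp = $true
$config.Update()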

To avoid errors, a few points need to be verified before testing Office Web Apps within SharePoint:

a) Check SharePoint authentication mode (claims-based and not classic) using PowerShell:

  • $WebAppURL = "http://webapp/"
  • (Get-SPWebApplication $WebAppURL).UseClaimsAuthentication

b) Check that the login account is not a system account but a testing account.

c) Check that editing is enabled on the Office Web Apps farm; if it is false, set it to true using the PowerShell command:

  • Set-OfficeWebAppsFarm -EditingEnabled:$true

d) Check that Office Web Apps has enough memory

Need help? More details can be found here.

The article How To Deploy Office Web Apps Server 2013 appeared first on Blog dbi services.
