Feed aggregator

How to get unique values/blanks across all columns

Tom Kyte - Sat, 2019-10-19 15:45
Hi, I have a wide table with 200 odd columns. Requirement is to pivot the columns and display the unique values and count of blanks within each column <code>CREATE TABLE example( c1 VARCHAR(10), c2 VARCHAR(10), c3 VARCHAR(10) ); / INSERT ...
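A minimal sketch of one possible answer (assuming Oracle 11g or later and the three-column example table from the question): UNPIVOT the columns into rows, then aggregate per column. Column and alias names are illustrative.

-- Turn columns into rows, then count distinct values and blanks (NULLs) per column.
-- INCLUDE NULLS keeps the rows where the column value is blank.
select col_name,
       count(distinct col_value)                     as unique_values,
       count(case when col_value is null then 1 end) as blank_count
from   example
unpivot include nulls (col_value for col_name in (c1, c2, c3))
group  by col_name
order  by col_name;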
Categories: DBA Blogs

Elastic search using Oracle 19c

Tom Kyte - Sat, 2019-10-19 15:45
Team, very recently we got a question from our customer: "Can I replace Elasticsearch with Oracle 19c, or any version of the Oracle database prior to that?" Any inputs/directions on that please - kindly advise.
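There is no drop-in replacement, but for basic full-text search the usual starting point is Oracle Text, which ships with the database. A hedged sketch, assuming a hypothetical DOCS table with a CLOB column named BODY:

-- Create a context index and run a relevance-ranked full-text query.
create table docs (id number primary key, body clob);

create index docs_body_ctx on docs (body)
  indextype is ctxsys.context;

select id, score(1) as relevance
from   docs
where  contains(body, 'oracle AND (search OR index)', 1) > 0
order  by score(1) desc;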
Categories: DBA Blogs

Add NULLABLE column with DEFAULT values to a large table

Tom Kyte - Sat, 2019-10-19 15:45
I'm adding a number of columns with DEFAULT values that are NULLABLE to a large table e.g <code> alter table big_table add (col1 varchar2(1) default 0, col2 varchar2(1) default 0); </code> It's taking a long time to do because Oracle is e...
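A hedged sketch of the usual workarounds; the exact behaviour depends on the database version, so the metadata-only optimisations noted below should be verified on your release:

-- 11g: adding a NOT NULL column with a DEFAULT is a metadata-only change (fast),
-- because existing rows do not have to be physically updated.
alter table big_table add (col1 varchar2(1) default '0' not null);

-- 12c and later: the metadata-only optimisation also covers NULLABLE columns with
-- a DEFAULT, so the statement from the question should no longer touch every row.
alter table big_table add (col2 varchar2(1) default '0');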
Categories: DBA Blogs

Create table from select with changes in the column values

Tom Kyte - Sat, 2019-10-19 15:45
Hello, In the work we have an update script that takes around 20 hours, and some of the most demanding queries are updates where we change some values, something like: <code> UPDATE table1 SET column1 = DECODE(table1.column1,null,null,'no...
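A common alternative to a long-running UPDATE of most of a table is to rebuild it with CREATE TABLE ... AS SELECT and then swap the tables. A minimal sketch, using the names from the excerpt; COLUMN2, COLUMN3 and the replacement value are hypothetical:

-- Rebuild instead of update: write the transformed rows once, with minimal redo.
create table table1_new nologging as
select decode(column1, null, null, 'new_value') as column1,
       column2,
       column3
from   table1;

-- Then recreate indexes, constraints and grants on TABLE1_NEW, validate the data,
-- and swap the tables with RENAME.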
Categories: DBA Blogs

Data archival and purging for OLTP database.

Tom Kyte - Sat, 2019-10-19 15:45
Hi Tom, Need your suggestion regarding a data archival and purging solution for an OLTP db. Currently we are planning the below approach. The database size is 150 GB and we plan to run the jobs monthly. 1) Generate flat files from table based on...
Categories: DBA Blogs

Job to end in case connection not establishing with utl_http.begin_request

Tom Kyte - Sat, 2019-10-19 15:45
I am tracking around 17000 orders through a web service via PL/SQL to a destination server. I am running multiple jobs in batches (500 orders per job) to invoke the web service and get the order status, so around 34 jobs are running (17000/500) ...
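One possible way to stop a job hanging when the destination server never answers is to set a transfer timeout before the request and trap the resulting exception. A hedged sketch; the URL and the error handling are made up for illustration:

declare
  l_req  utl_http.req;
  l_resp utl_http.resp;
begin
  -- Give up after 30 seconds instead of waiting indefinitely.
  utl_http.set_transfer_timeout(30);
  l_req  := utl_http.begin_request('http://destination.example.com/order-status');
  l_resp := utl_http.get_response(l_req);
  utl_http.end_response(l_resp);
exception
  when utl_http.transfer_timeout or utl_http.request_failed then
    -- Mark this batch of orders as "unreachable" and let the job end cleanly.
    null;
end;
/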
Categories: DBA Blogs

Inserting values into a table with '&'

Tom Kyte - Sat, 2019-10-19 15:45
Hi, I want to insert values into a table as follows: create table test (name varchar2(35)); insert into test values ('&Vivek'); I tried the escape character '\' but the system still asks for a value of the substitution variable. I also did a...
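The ampersand is intercepted by the client (SQL*Plus / SQL Developer), not by the database, so the usual fixes are to switch substitution off or to build the character with CHR(38). A quick sketch:

-- Option 1: disable substitution variables for the session.
set define off
insert into test values ('&Vivek');

-- Option 2: keep substitution on, but concatenate the ampersand as CHR(38).
set define on
insert into test values (chr(38) || 'Vivek');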
Categories: DBA Blogs

SELECT ANY DICTIONARY - What Privileges Does it Have - SELECT_CATALOG_ROLE

Pete Finnigan - Sat, 2019-10-19 15:45
There have been a few blog posts over the years discussing the difference between SELECT ANY DICTIONARY and the SELECT_CATALOG_ROLE. Hemant posted in 2014 about the difference between SELECT ANY DICTIONARY and SELECT_CATALOG_ROLE. This post was a....[Read More]

Posted by Pete On 11/10/19 At 01:59 PM

Categories: Security Blogs

What Privileges Can you Grant On PL/SQL?

Pete Finnigan - Sat, 2019-10-19 15:45
Oracle has a lot of privileges and models; privileges can be granted to users, roles and also, since 12c, roles can be granted to PL/SQL code (I will not discuss this aspect here as I will blog separately about grants....[Read More]

Posted by Pete On 08/10/19 At 01:43 PM

Categories: Security Blogs

ORA-01950 Error on a Sequence - Error on Primary Key Index

Pete Finnigan - Sat, 2019-10-19 15:45
I posted yesterday a blog about an error on a sequence of ORA-01950 on tablespace USERS - ORA-01950 Error on a Sequence . This was attributed to the sequence by me because that's where the error in Oracle was pointing....[Read More]

Posted by Pete On 01/10/19 At 01:12 PM

Categories: Security Blogs

ORA-01950 Error on a Sequence

Pete Finnigan - Sat, 2019-10-19 15:45
UPDATE: I have updated information for this post and rather than make this one much longer I created a new post - please see ORA-01950 Error on a Sequence - Error on Primary Key Index. Wow, it's been a while....[Read More]

Posted by Pete On 30/09/19 At 01:42 PM

Categories: Security Blogs

Machine Learning with SQL

Andrejus Baranovski - Sat, 2019-10-19 10:25
Python (and soon JavaScript with TensorFlow.js) is a dominant language for Machine Learning. What about SQL? There is a way to build and run Machine Learning models in SQL. There could be a benefit in running model training close to the database, where the data stays. With SQL we can leverage strong data analysis out of the box and run algorithms without fetching data to the outside world (which could be an expensive operation in terms of performance, especially with large datasets). This post describes how to do Machine Learning in the database with SQL.
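For context, in the Oracle database the in-database machine learning API is exposed through the DBMS_DATA_MINING PL/SQL package, and scoring can be done directly in SQL. A hedged sketch only; the CUSTOMERS table, its columns and the model settings below are illustrative and not taken from the post:

-- Settings table selecting a classification algorithm (literal setting values).
create table my_model_settings (setting_name varchar2(30), setting_value varchar2(4000));
insert into my_model_settings values ('ALGO_NAME', 'ALGO_DECISION_TREE');
commit;

begin
  dbms_data_mining.create_model(
    model_name          => 'CHURN_MODEL',
    mining_function     => dbms_data_mining.classification,
    data_table_name     => 'CUSTOMERS',
    case_id_column_name => 'CUSTOMER_ID',
    target_column_name  => 'HAS_CHURNED',
    settings_table_name => 'MY_MODEL_SETTINGS');
end;
/

-- Scoring happens in plain SQL.
select customer_id, prediction(churn_model using *) as predicted_churn
from   customers;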



Read more in my Towards Data Science post.

Basic Replication -- 8 : REFRESH_MODE ON COMMIT

Hemant K Chitale - Sat, 2019-10-19 09:26
So far, in previous posts in this series, I have demonstrated Materialized Views that are set to REFRESH ON DEMAND.

You can also define a Materialized View that is set to REFRESH ON COMMIT -- i.e. every time DML against the Source Table is committed, the MV is also immediately updated.  Such an MV must be in the same database  (you cannot define an ON COMMIT Refresh across two databases  -- to do so, you have to build your own replication code, possibly using Database Triggers or external methods of 2-phase commit).

Here is a quick demonstration, starting with a Source Table in the HEMANT schema and then building a FAST REFRESH MV in the HR schema.

SQL> show user
USER is "HEMANT"
SQL> create table hemant_source_tbl (id_col number not null primary key, data_col varchar2(30));

Table created.

SQL> grant select on hemant_source_tbl to hr;

Grant succeeded.

SQL> create materialized view log on hemant_source_tbl;

Materialized view log created.

SQL> grant select on mlog$_hemant_source_tbl to hr;

Grant succeeded.

SQL>
SQL> grant create materialized view to hr;

Grant succeeded.

SQL> grant on commit refresh on hemant_source_tbl to hr;

Grant succeeded.

SQL>
SQL> grant on commit refresh on mlog$_hemant_source_tbl to hr;

Grant succeeded.

SQL>


Note : I had to grant the CREATE MATERIALIZED VIEW privilege to HR for this test case. Also, as the MV is to Refresh ON COMMIT, two additional object-level grants on the Source Table and the Materialized View Log are required as the Refresh is across schemas.

SQL> connect hr/HR@orclpdb1
Connected.
SQL> create materialized view hr_mv_on_commit
2 refresh fast on commit
3 as select id_col as primary_key_col, data_col as value_column
4 from hemant.hemant_source_tbl;

Materialized view created.

SQL>


Now that the Materialized View is created successfully, I will test DML against the table and check that an explicit REFRESH call (e.g. DBMS_MVIEW.REFRESH or DBMS_REFRESH.REFRESH) is not required.
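(For comparison only, and not needed for an ON COMMIT MV: an explicit fast refresh of this MV would look something like the call below.)

-- 'F' asks for a FAST (incremental) refresh using the materialized view log.
exec dbms_mview.refresh('HR_MV_ON_COMMIT', 'F')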

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> insert into hemant_source_tbl values (1,'First');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from hr.hr_mv_on_commit;

PRIMARY_KEY_COL VALUE_COLUMN
--------------- ------------------------------
1 First

SQL> connect hr/HR@orclpdb1
Connected.
SQL> select * from hr_mv_on_commit;

PRIMARY_KEY_COL VALUE_COLUMN
--------------- ------------------------------
1 First

SQL>


The Materialized View in the HR schema was refreshed immediately, without an explicit REFRESH call.

Remember : An MV that is to REFRESH ON COMMIT must be in the same database as the Source Table.




Categories: DBA Blogs

pgconf.eu – Welcome to the community

Yann Neuhaus - Fri, 2019-10-18 15:41

On Tuesday I started my journey to Milan to attend my first pgconf.eu, which was also my first big conference. I was really excited about what was ahead of me. How would it be to become a visible part of the community? How would it be to give my first presentation in front of so many people?

The conference started with the welcome and opening session. It took place in a huge room, big enough to give all of the participants a seat. Really amazing how big this community is, and it is still growing. So many people from all over the world (Japan, USA, Chile, Canada...) attended this conference.

And suddenly I realized: this is the room where I have to give my session. Some really strange feelings came up. This is my first presentation at a conference, this is the main stage, there is space for so many people! I really hoped they would make it smaller for me. But there was something else: anticipation.

But first I want to give you some impressions from my time at the pgconf. Amazing to talk to one of the main developers of Patroni. I was really nervous when I just went up to him and said: "Hi, may I ask you a question?" Of course he didn't say no. All the other ladies and gentlemen I met (the list is quite long) are so nice and really open minded (is this because they all work with an open source database?). And of course a special thanks to Pavel Golub for the great picture. Find it in Daniel's blog.
Besides meeting all these great people, I enjoyed some really informative and cool sessions.


Although I still hoped they were going to make the room smaller for my presentation, of course they didn't. So I had only one chance:

And I did it, and afterwards I filed it under "good experience". A huge room is not so different from a small one.

Now that I am back home, I want to say: thanks pgconf.eu and dbi services for giving me this opportunity, and thanks to the community for this warm welcome.

The article pgconf.eu – Welcome to the community appeared first on Blog dbi services.

CBO Oddities – 1

Jonathan Lewis - Fri, 2019-10-18 12:10

I’ve decided to do a little rewriting and collating so that I can catalogue related ideas in an order that makes for a better narrative. So this is the first in a series of notes designed to help you understand why the optimizer has made a particular choice and why that choice is (from your perspective) a bad one, and what you can do either to help the optimizer find a better plan, or subvert the optimizer and force a better plan.

If you’re wondering why I choose to differentiate between “help the optimizer” and “subvert the optimizer” consider the following examples.

  • A query is joining two tables in the wrong order with a hash join when you know that a nested loop join in the opposite order would be far better, because you know that the data you want is very nicely clustered and there's a really good index that would make access to that data very efficient. You check the table preferences and discover that the table_cached_blocks preference (see end notes) is at its default value of 1, so you set it to 16 and gather fresh stats on the indexes on the table. Oracle now recognises the effectiveness of this index and changes plan accordingly.
  • The optimizer has done a surprising transformation of a query, aggregating a table before joining to a couple of other tables when you were expecting it to use the joins to eliminate a huge fraction of the data before aggregating it. After a little investigation you find that setting the hidden parameter _optimizer_distinct_placement to false stops this happening (both actions are sketched just after this list).
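For reference, here is a hedged sketch of both actions; the table, index and query below are invented and not taken from the examples above:

-- "Helping": record a more truthful picture of index clustering via a table preference,
-- then regather statistics so the index looks as good as it really is.
begin
  dbms_stats.set_table_prefs(
    ownname => user,
    tabname => 'ORDERS',
    pname   => 'TABLE_CACHED_BLOCKS',
    pvalue  => '16');
  dbms_stats.gather_table_stats(user, 'ORDERS', cascade => true);
end;
/

-- "Subverting": disable the transformation for this one statement only.
select /*+ opt_param('_optimizer_distinct_placement','false') */
       c.cust_name, count(*)
from   orders o
join   customers c on c.cust_id = o.cust_id
group  by c.cust_name;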

You may find the distinction unnecessarily fussy, but I'd call the first example "helping the optimizer" – it gives the optimizer some truthful information about your data that is potentially going to result in better decisions in many different statements – and the second example "subverting the optimizer" – you've brute-forced it into not taking a path you didn't like, but at the same time you may have stopped that feature from appearing in other ways or in other queries. Of course, you might have minimised the impact of setting the parameter by using the opt_param() hint to apply the restriction to just this one query; nevertheless it's possible that there is a better plan for the query that would have used the feature at some other point in the query if you'd managed to do something to help the optimizer rather than constraining it.

What’s up with the Optimizer

It’s likely that most of the articles will be based around interpreting execution plans since those are the things that tell us what the optimizer thinks will happen when it executes a statement, and within execution plans there are three critical aspects to consider –

  1. the numbers (most particularly Cost and Rows),
  2. the shape of the plan,
  3. the Predicate Information.

I want to use this note to make a couple of points about just the first of the three.

  • First – the estimates on any one line of an execution plan are "per start" of the line; some lines of an execution plan will be called many times in the course of a statement. In many cases the Rows estimate from one line of a plan will dictate the number of times that some other line of the plan will be executed – so a bad estimate of "how much data" can double up as a bad estimate of "how many times", leading to a plan that looks efficient on paper but does far too much work at run-time. A line in a plan that looks a little inefficient may be fine if it executes only once; a line that looks very efficient may be a disaster if it executes a million times. Being able to read a plan and spot the places where the optimizer has produced a poor estimate of Rows is a critical skill – and there are many reasons why the optimizer produces poor estimates. Being able to spot poor estimates depends fairly heavily on knowing the data, but if you know the generic reasons for the optimizer producing poor estimates you've got a head start for recognising and addressing the errors when they appear.
  • Second – Cost is synonymous with Time. For a given instance at a given moment there is a simple, linear, relationship between the figure that the optimizer reports for the Cost of a statement (or subsection of a statement) and the Time that the optimizer reports. For many systems (those that have not run the calibrate_io procedure) the Time is simply the Cost multiplied by the time the optimizer thinks it will take to satisfy a single block read request, and the Cost is the optimizer’s estimate of the I/O requirement to satisfy the statement – with a fudge factor introduced to recognise the fact that a “single block” read request ought to complete in less time than a “multiblock” read request. Generally speaking the optimizer will consider many possible plans for a statement and pick the plan with the lowest estimated cost – but there is at least one exception to this rule, and it is an unfortunate weakness in the optimizer that there are many valid reasons why its estimates of Cost/Time are poor. Of course, you will note that the values that Oracle reports for the Time column are only accurate to the second – which isn’t particularly helpful when a single block read typically operates in the range of a few milliseconds.

To a large degree the optimizer’s task boils down to:

  • What’s the volume and scatter of the data I need
  • What access paths, with what wastage, are available to get to that data
  • How much time will I spend on I/O reading (and possibly discarding) data to extract the bit I want

Of course there are other considerations like the amount of CPU needed for a sort, the potential for extra I/O when sorts or hash joins spill to disk, the time to handle a round-trip to a remote system, and RAC variations on the basic theme. But for many statements the driving issue is that any bad estimates of "how much data" and "how much (real) I/O" will lead to bad, potentially catastrophic, choices of execution plan. In the next article I'll list all the different reasons (that I can think of at the time) why the optimizer can produce bad estimates of volume and time.

References for Cost vs. Time

References for table_cached_blocks:

 

OAC v105.4: Understanding Map Data Quality

Rittman Mead Consulting - Fri, 2019-10-18 08:58

Last week Oracle Analytics Cloud v105.4 was announced. One of the features particularly interested me since it reminded me of the story of an Italian couple who planned to spend their honeymoon in Sydney, Australia, and ended up in a Sydney all right, but the one in Nova Scotia, because of a travel agency error. For the funny people out there: don't worry, it wasn't me!

The feature is "Maps ambiguous location matches" and I wanted to write a bit about it.

#OracleAnalytics 105.4 update just about to go live and deploy on your environments. Check-out some of the new features coming. Here is a first list of selected items: https://t.co/Megqz5ekcx. Stay tuned with whole #OAC team (@OracleAnalytics,@BenjaminArnulf...) for more examples pic.twitter.com/CWpj8rC1Bf

— Philippe Lions (@philippe_lions) October 8, 2019

By the way, OAC 105.4 includes a good set of new features like a unified Home page, the possibility to customize any DV font, and more options for security and on-premises connections, amongst others. For a full list of new features check out the related Oracle blog or videos.

Maps: a bit of History

Let's start with a bit of history. Maps have been around for a long time, first in OBIEE and later in OAC. In the earlier stages of my career I spent quite a lot of time writing HTML and JavaScript to include map visualizations within OBIEE 10g. The basic tool was called MapViewer, and the knowledge & time required to create a custom clickable or drillable map was... huge!

With the arrival of OBIEE 11g and 12c the mapping capability became easier: a new "Map" visualization type was included in Answers, and all we had to do was match the geographical reference coming from one of our Subject Areas (e.g. Country Name) with the related column containing the shape information (e.g. the Country Shape).


After doing so, we were able to plot our geographical information properly: adding multiple layers, drilling capabilities and tooltips was just a matter of a few clicks.

The Secret Source: Good Maps and Data Quality

Perfect, you might think: we can easily use maps everywhere as soon as we have any type of geo-location data available in our dataset! Well, the reality in the old days wasn't like that. Oracle at the time provided some sample maps with a certain level of granularity, covering only some countries in detail. What if we wanted to display all the suburbs of Verona? Sadly that wasn't included, so we were forced either to find a free source online or to purchase one from a vendor.

The source of map shapes was only half of the problem to solve: we always needed to create a join with a column coming from our Subject Area! Should we use the zip code? What about the address? Is the city name enough? The deeper we went into the mapping details, the more problems arose.

A common problem (as we saw above with Sydney) was using the city name. How many cities share the same name? How many regions? Is the street name correct? Data quality was, and still is, crucial to provide accurate data and not just a nice but useless map view.

OAC and the Automatic Mapping Capability

Within OAC, DV offers the Automatic Mapping capability: we only need to include in a Project a column containing a geographical reference (lat/long, country name etc.), select "Map" as the visualization type, and the tool will choose the most appropriate mapping granularity that matches our dataset.


Great! This solves all our issues! Well... not all of them! The Automatic Mapping capability doesn't have all the possible maps in it, but we can always include new custom maps using the OAC Console if we need them.

So What's New in 105.4?

All the above was available way before the latest OAC release. 105.4 adds the "Maps ambiguous location matches" feature, which means that every time we create a Map view, OAC will provide us with a Location Matches option.


If we click this option, OAC will present a simple window where we can see:

  • How many locations matched
  • How many locations have issues
  • What's the type of Issue?

The type of issue can be one of:

  • No Match in case OAC doesn't find any comparable geographical value
  • Multiple Matches  when there are multiple possible associations
  • Partial Matches when there is a match only to part of the content

We can then take this useful information and start a process of data cleaning to raise the quality of our data visualization.

Conclusion

Maps were and are a really important visualization available in OAC. The Maps ambiguous location matches feature provides a way to understand if our visualization is representative of our dataset. So, if you want to avoid spending your honeymoon in the wrong Sydney or if you just want to provide accurate maps on top of your dataset, use this feature available in OAC!

Categories: BI & Warehousing

In Defence of Best Practices

Tim Hall - Fri, 2019-10-18 03:38

The subject of “Best Practices” came up again yesterday in a thread on Twitter. This is a subject that rears its head every so often.

I understand all the arguments against the term “Best Practices”. There isn’t one correct way to do things. If there were it would be the only way, or automatic etc. It’s all situational etc. I really do understand all that. I’ve been in this conversation so many times over the years you wouldn’t believe it. I’ve heard all the various sentences and terms people would prefer to use rather than “Best Practice”, but here’s my answer to all that.

“Best practices are fine. Get over yourself and shut up!”

Tim Hall : 18th October 2019

I’ve said this more politely in many other conversations, including endless email chains etc.

When it comes down to it, people need guidance. A good best practice will give some context to suggest it is a starting point, and will give people directions for further information/investigation, but it’s targeted at people who don’t know enough about what they are doing and need help. Without a best practice they will do something really bad, and when shit happens they will blame the product. A good best practice can be the start of a journey for people.

I agree that the “Always do this because ‘just bloody do it!'” style of best practice is bad, but we all know that…

I just find the whole conversation so elitist. I spend half of my life Googling solutions (mostly non-Oracle stuff) and reading best practices and some of them are really good. Some of them have definitely improved my understanding, and left me in a position where I have a working production system that would otherwise not be working.

I’m sure this post will get a lot of reactions where people try and “explain to me” why I am wrong, and what I’m not understanding about the problems with best practices. As mentioned before, I really do know all that and I think you are wrong, and so do the vast majority of people outside your elitist echo chamber. Want to test that? Try these…

  • Write a post called “Best Practices for {insert subject of your choice}”. It will get more hits than anything else you’ve ever written.
  • Submit a conference session called “Best Practices for {insert subject of your choice}”. Assuming it gets through the paper selection, you will have more bums on seats than you’ve ever had before for that same subject.

Rather than wasting your life arguing about how flawed the term “Best Practices” is, why don’t you just write some good best practices? Show the world how they should be done, and start people on a positive journey. It’s just a term. Seriously. Get over yourself!

Cheers

Tim…

PS. I hope people from yesterday’s tweets don’t think this is directed at them. It’s really not. It’s the subject matter! This really is a subject I’ve revisited so many times over the years…

Updates

Due to repeatedly having to explain myself, here come some points people have raised and my reactions. I’m sure this list will grow as people insist on “educating me” about why I’m wrong.

I prefer “standard” or “normal” to “best”. As I said at the start of the post, I’ve heard just about every potential variation of this, and I just don’t care. They are all the same thing. They are all best practices. It’s just words. Yes, I know what “best” means, but that’s irrelevant. This is a commonly used term in tech and you aren’t getting rid of it, so own it!

I’ve seen people weaponize best practices. OK. So are you saying they won’t weaponize “standard practices” or “normal practices”? They won’t ever say, “So are you telling me you went against normal practices?”. Of course they will. Stupid people/companies will do stupid things regardless of the name.

But it’s not the “best”! Did you even read my post? I’m so tired of this. It’s a best practice to never use hints in SQL. I think that’s pretty solid advice. I do use hints in some SQL, but I always include a comment to explain why. I have deviated from best practice, but documented the reason why. If a person/company wants no deviation from best practice, they can remove it and have shit performance. That’s their choice. I’ve been transparent and explained my deviation. If this is not the way you work, you are wrong, not the best practice.

Most vendor best practice documents are crap. I have some sympathy for this, but I raise tickets against bad documentation, including best practices, and generally the reception to these has been good. The last one was a couple of weeks ago and the company (not Oracle) changed the docs the same day. I always recommend raising an SR/ticket/bug against bad documentation. It doesn’t take much time and you are improving things for yourself and everyone else. I feel like you can’t complain about the quality of the docs if you never point out the faults.

In Defence of Best Practices was first posted on October 18, 2019 at 9:38 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Getting started with Pivotal Telemetry Collector

Pas Apicella - Thu, 2019-10-17 18:44
Pivotal Telemetry Collector is an automated tool that collects data from a series of Pivotal Cloud Foundry (PCF) APIs found within a foundation and securely sends that data to Pivotal. The tool collects:

  • Configuration data from the Ops Manager API.
  • Optional certificate data from the CredHub API.
  • Optional app, task and service instance usage data from the Usage Service API.

Pivotal uses this information to do the following:

  • Improve its products and services.
  • Fix problems.
  • Advise customers on how best to deploy and use Pivotal products.
  • Provide better customer support.
Steps to Run

1. Download the scripts required to run "Pivotal Telemetry Collector" using this URL from Pivotal Network

https://network.pivotal.io/products/pivotal-telemetry-collector/

2. Extract to the file system. You will notice 3 executables; use the right one for your OS. In my case it was the Mac OS X executable "telemetry-collector-darwin-amd64".

-rwxr-xr-x   1 papicella  staff  14877449  5 Oct 00:42 telemetry-collector-linux-amd64*
-rwxr-xr-x   1 papicella  staff  14771312  5 Oct 00:42 telemetry-collector-darwin-amd64*
-rwxr-xr-x   1 papicella  staff  14447104  5 Oct 00:42 telemetry-collector-windows-amd64.exe*

3. Make sure you have network access to your PCF env. You will need to hit the Operations Manager URL as well as the CF CLI API and usage service API endpoints as shown below

Ops Manager endpoint

$ ping opsmgr-02.haas-yyy.pez.pivotal.io
PING opsmgr-02.haas-yyy.pez.pivotal.io (10.195.1.1): 56 data bytes
64 bytes from 10.195.1.1: icmp_seq=0 ttl=58 time=338.412 ms

CF API endpoint

$ ping api.system.run.haas-yyy.pez.pivotal.io
PING api.system.run.haas-yyy.pez.pivotal.io (10.195.1.2): 56 data bytes
64 bytes from 10.195.1.2: icmp_seq=0 ttl=58 time=380.852 ms

Usage Service API endpoint

$ ping app-usage.system.run.haas-yyy.pez.pivotal.io
PING app-usage.system.run.haas-yyy.pez.pivotal.io (10.195.1.3): 56 data bytes
64 bytes from 10.195.1.3: icmp_seq=0 ttl=58 time=495.996 ms

4. Now you can use this via two options. As you would have guessed, we are using the CLI, given we have downloaded the scripts.

Concourse: https://docs.pivotal.io/telemetry/1-1/using-concourse.html
CLI: https://docs.pivotal.io/telemetry/1-1/using-cli.html

5. To run our first collection we would run the collector script as follows. More information about the CLI options can be found at the link below or by using the help option "./telemetry-collector-darwin-amd64 --help".

https://docs.pivotal.io/telemetry/1-1/using-cli.html

Script Name: run-with-usage.sh

$ ./telemetry-collector-darwin-amd64 collect --url https://opsmgr-02.haas-yyy.pez.pivotal.io/ --username admin --password {PASSWD} --env-type production --output-dir output --usage-service-url https://app-usage.system.run.haas-yyy.pez.pivotal.io/ --usage-service-client-id push_usage_service --usage-service-client-secret {PUSH-USAGE-SERVICE-PASSWORD} --usage-service-insecure-skip-tls-verify --insecure-skip-tls-verify --cf-api-url https://api.system.run.haas-yyy.pez.pivotal.io

Note: You would obtain the PUSH-USAGE-SERVICE-PASSWORD from the Ops Manager PAS tile credentials tab, as shown in the screenshot below.


6. All set, let's try it out

$ ./run-with-usage.sh
Collecting data from Operations Manager at https://opsmgr-02.haas-yyy.pez.pivotal.io/
Collecting data from Usage Service at https://app-usage.system.run.haas-yyy.pez.pivotal.io/
Wrote output to output/FoundationDetails_1571355194.tar
Success!

7. Let's extract the output TAR as follows

$ cd output/
$ tar -xvf FoundationDetails_1571355194.tar
x opsmanager/ops_manager_deployed_products
x opsmanager/pivotal-container-service_resources
x opsmanager/pivotal-container-service_properties
x opsmanager/pivotal-mysql_resources
x opsmanager/pivotal-mysql_properties
x opsmanager/cf_resources
x opsmanager/cf_properties
x opsmanager/p-compliance-scanner_resources
x opsmanager/p-compliance-scanner_properties
x opsmanager/ops_manager_vm_types
x opsmanager/ops_manager_diagnostic_report
x opsmanager/ops_manager_installations
x opsmanager/ops_manager_certificates
x opsmanager/ops_manager_certificate_authorities
x opsmanager/metadata
x usage_service/app_usage
x usage_service/service_usage
x usage_service/task_usage
x usage_service/metadata

8. Now let's view the output, which is a set of JSON files. To do that I simply use the "cat" command and pipe it to jq as shown below

$ cat ./output/opsmanager/ops_manager_installations | jq -r
{
  "installations": [
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "p-compliance-scanner-a53448be03a372a13d89",
          "identifier": "p-compliance-scanner",
          "label": "Compliance Scanner for PCF",
          "product_version": "1.0.0"
        }
      ],
      "deletions": [],
      "finished_at": "2019-08-30T09:38:29.679Z",
      "id": 25,
      "started_at": "2019-08-30T09:21:44.810Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [
        {
          "change_type": "deletion",
          "deployment_status": "pending",
          "guid": "p-compliance-scanner-1905a6707e4f434e315a",
          "identifier": "p-compliance-scanner",
          "label": "Compliance Scanner for PCF",
          "product_version": "1.0.0-beta.25"
        }
      ],
      "finished_at": "2019-08-08T02:10:51.130Z",
      "id": 24,
      "started_at": "2019-08-08T02:09:10.290Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-07-18T12:27:54.301Z",
      "id": 23,
      "started_at": "2019-07-18T11:31:19.781Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ]
    },
    {
      "additions": [],
      "deletions": [
        {
          "change_type": "deletion",
          "deployment_status": "pending",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "finished_at": "2019-07-07T00:16:31.948Z",
      "id": 22,
      "started_at": "2019-07-07T00:04:32.974Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-07-07T00:02:12.003Z",
      "id": 21,
      "started_at": "2019-07-06T23:57:06.401Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ]
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "p-compliance-scanner-1905a6707e4f434e315a",
          "identifier": "p-compliance-scanner",
          "label": "Compliance Scanner for PCF",
          "product_version": "1.0.0-beta.25"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-10T09:23:19.595Z",
      "id": 20,
      "started_at": "2019-06-10T09:10:44.431Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "skipped",
          "guid": "aquasec-1b94477ae275ee81be58",
          "identifier": "aquasec",
          "label": "Aqua Security for PCF",
          "product_version": "1.0.0"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-06T17:38:18.396Z",
      "id": 19,
      "started_at": "2019-06-06T17:35:34.614Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "skipped",
          "guid": "aquasec-1b94477ae275ee81be58",
          "identifier": "aquasec",
          "label": "Aqua Security for PCF",
          "product_version": "1.0.0"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-06T17:33:18.545Z",
      "id": 18,
      "started_at": "2019-06-06T17:21:41.529Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T11:15:43.546Z",
      "id": 17,
      "started_at": "2019-06-04T10:49:57.969Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T10:44:04.018Z",
      "id": 16,
      "started_at": "2019-06-04T10:17:28.230Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T09:52:30.782Z",
      "id": 15,
      "started_at": "2019-06-04T09:48:45.867Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T09:21:17.245Z",
      "id": 14,
      "started_at": "2019-06-04T09:17:45.360Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T08:50:33.333Z",
      "id": 13,
      "started_at": "2019-06-04T08:47:09.790Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T08:32:44.772Z",
      "id": 12,
      "started_at": "2019-06-04T08:23:27.386Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T08:16:41.757Z",
      "id": 11,
      "started_at": "2019-06-04T08:13:54.645Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T01:53:50.594Z",
      "id": 10,
      "started_at": "2019-06-04T01:43:56.205Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ]
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T01:28:22.975Z",
      "id": 9,
      "started_at": "2019-06-04T01:24:52.587Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-03T08:37:25.961Z",
      "id": 8,
      "started_at": "2019-06-03T08:13:07.511Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ]
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-03T04:57:06.897Z",
      "id": 7,
      "started_at": "2019-06-03T03:52:13.705Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "pivotal-mysql-0e5d717f1c87c8095c9d",
          "identifier": "pivotal-mysql",
          "label": "MySQL for Pivotal Cloud Foundry v2",
          "product_version": "2.5.4-build.51"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-22T05:15:55.703Z",
      "id": 6,
      "started_at": "2019-05-22T04:09:49.841Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-05-22T02:12:22.934Z",
      "id": 5,
      "started_at": "2019-05-22T01:45:28.101Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "failed",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-22T00:23:29.844Z",
      "id": 4,
      "started_at": "2019-05-21T23:16:42.418Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-05-16T01:50:50.640Z",
      "id": 3,
      "started_at": "2019-05-16T01:45:22.438Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "pivotal-container-service-5c28f63410227c2221c8",
          "identifier": "pivotal-container-service",
          "label": "Enterprise PKS",
          "product_version": "1.4.0-build.31"
        }
      ]
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "pivotal-container-service-5c28f63410227c2221c8",
          "identifier": "pivotal-container-service",
          "label": "Enterprise PKS",
          "product_version": "1.4.0-build.31"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-15T00:08:32.241Z",
      "id": 2,
      "started_at": "2019-05-14T23:33:58.105Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-14T23:29:47.525Z",
      "id": 1,
      "started_at": "2019-05-14T23:13:13.244Z",
      "status": "succeeded",
      "unchanged": [],
      "updates": []
    }
  ]
}

Optionally, you should send this TAR file output with every ticket/case you create, so support has a good snapshot of what your environment looks like to help diagnose support issues for you.

telemetry-collector send --path --api-key

For the API key, please contact your Pivotal AE or Platform Architect to request one, as the Telemetry team issues API keys to customers.


More Information 

https://docs.pivotal.io/telemetry/1-1/index.html
Categories: Fusion Middleware

Free Oracle Cloud: 14. Your Oracle Cloud Free Trial has expired (but FREE still running)

Dimitri Gielis - Thu, 2019-10-17 16:19
This post is the last post of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

Today I got an email that my Oracle Cloud account had expired. While I have an Always FREE Oracle Cloud account, when I signed up I also got some extra credits that lasted for a month. Those credits are no longer valid.


When you log in to your Oracle Cloud Dashboard you will get a notification on top too, but nothing to worry about.


It has some consequences though: on the menu, some options are grayed out. The one I actually use is Email Delivery, which seems to be grayed out too, although normally you should be able to send 1,000 emails per month. So maybe grayed out also means "not full service".


When I checked it out, it said it's part of the paid plan. I remember some discussions at Oracle Open World where they recommended upgrading to a Paid account; as long as you only use the Always FREE services, you are not charged.


So I decided to upgrade to a Paid account: Pay As You Go:


You have to provide a credit card, but that was a bit of an issue for me. Apparently, Safari doesn't really work well with this screen, so I switched to Chrome. The next hiccup I had was when I added my AMEX card... it said it was an invalid card.


Then I used my VISA card and that seemed to work well:

Click the Start Paid Account:


Finally, it will say your payment method will be reviewed and after that you are live.


It wasn't immediately clear to me that I had to wait for the confirmation email, but when I went to Payment Method again, I saw the review was still in progress:


And a few minutes later I got the email that my account was upgraded:


When you look at your Oracle Cloud Dashboard, there's a cost calculator, so you see how much you have to pay. As long as I use the Always FREE components, I expect the amount to stay 0 :)


But the nice thing now is that you have access to all of Oracle Cloud again (e.g. Email Delivery).
Categories: Development

Funny Gamertags for Xbox That You Don’t Want to Miss

VitalSoftTech - Thu, 2019-10-17 09:49

As a regular gamer on Xbox, you'll find that some cool and funny Gamertags are very easy to come by. In the gaming universe of Xbox, the two essential things you need are 90% smarts and 10% luck, and you will come out the conqueror of the game. However, apart from skill and fortune, you need one […]

The post Funny Gamertags for Xbox That You Don’t Want to Miss appeared first on VitalSoftTech.

Categories: DBA Blogs
