Pakistan's First Oracle Blog

Blog By Fahd Mirza Chughtai

Oracle DBAs and GDPR

Wed, 2018-04-18 01:32
The General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) is a regulation by which the European Parliament, the Council of the European Union and the European Commission intend to strengthen and unify data protection for all individuals within the European Union (EU).


To align Oracle databases with the GDPR, we have to encrypt all databases and files on disk, aka encryption at rest (when data is stored). We also have to encrypt the database network traffic. 

The Transparent Data Encryption (TDE) feature allows sensitive data to be encrypted within the datafiles to prevent access to it from the operating system. 

You cannot encrypt an existing tablespace in place. So if you wish to encrypt existing data, you need to move it from unencrypted tablespaces to encrypted tablespaces. For this you can use any of the following methods (a sketch follows the list):

i) The Oracle Data Pump utility.
ii) Commands like CREATE TABLE ... AS SELECT ...
iii) Moving tables with ALTER TABLE ... MOVE ... and rebuilding indexes.
iv) Oracle online table redefinition.
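
For illustration, here is a minimal sketch of method (iii), assuming the TDE keystore is already configured and open; the tablespace, table, and index names are purely hypothetical:

CREATE TABLESPACE enc_data
  DATAFILE '+DATA' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

-- Move an existing table into the encrypted tablespace, then rebuild its indexes
ALTER TABLE app_owner.sales_hist MOVE TABLESPACE enc_data;
ALTER INDEX app_owner.sales_hist_pk REBUILD TABLESPACE enc_data;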

In order to encrypt network traffic between client and server, we have two options from Oracle:

i) Native Network Encryption for Database Connections
ii) Configuration of TCP/IP with SSL and TLS for Database Connections

Native Network Encryption only requires settings in the sqlnet.ora file and doesn't have the overhead of the second option, where you have to configure various network files on the server and client, obtain certificates, and create a wallet. With the first option encryption is not guaranteed unless both ends insist on it, whereas the second option guarantees encryption. 
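
As a rough sketch, the server-side sqlnet.ora entries for native encryption look like this (values are illustrative; the client side has matching *_CLIENT parameters):

SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256)

With REQUIRED on one side, the connection is refused if the other side rejects encryption; with the default ACCEPTED on both sides, traffic may stay unencrypted, which is the "no guarantee" caveat mentioned above.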
Categories: DBA Blogs

AWS Pricing Made Easy By Simple Monthly Calculator

Wed, 2018-04-18 01:26
With an ever-changing pricing model and services, it's hard to keep track of AWS costs.

If you want to check how much it would cost to run a certain AWS service, tailored to your requirements, then use the following Simple Monthly Calculator from AWS.

AWS Price Calculator.
Categories: DBA Blogs

AWS CLI is Boon for DBAs

Sat, 2018-04-14 01:45
For most production RDS databases, we normally have a related EC2 server to access the RDS database through tools like Data Pump, SQL*Plus, etc.



RDS is great for point and click, but if you want to run your own monitoring or other administration-related scripts, you need an EC2 instance with the AWS CLI installed. For example, if you want to check the status of your RDS instances, or check whether today's RDS snapshots completed and notify the DBA by page or email, you can do that with the AWS CLI.
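
A couple of quick checks of that kind are sketched below (the instance identifier is hypothetical):

# Status of all RDS instances
aws rds describe-db-instances \
    --query 'DBInstances[].[DBInstanceIdentifier,DBInstanceStatus]' \
    --output table

# Automated snapshots of a given instance and their status
aws rds describe-db-snapshots \
    --db-instance-identifier myprod-db \
    --snapshot-type automated \
    --query 'DBSnapshots[].[DBSnapshotIdentifier,SnapshotCreateTime,Status]' \
    --output table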

I will be writing and sharing some RDS-related shell scripts using the AWS CLI in the coming days, so stay tuned. 
Categories: DBA Blogs

Oracle DBAs and Meltdown & Spectre (M&S) vulnerability Patch

Thu, 2018-03-29 21:01
So what do Oracle DBAs need to do regarding the Meltdown & Spectre (M&S) vulnerability patch? 



Well, they should ask the sysadmins to install the patch on the affected versions. They need to get a maintenance window for that, take a full backup of the Oracle infrastructure and databases before patching, and capture a baseline of OS metrics to compare with the post-patch state of the system. 

There is not much for Oracle DBAs to do in this regard, as the vulnerability is in hardware and is mainly a sysadmin concern. Nonetheless, Oracle DBAs should use the opportunity to install the latest CPU (Critical Patch Update). 
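
As a quick reminder, you can list what is already installed in the Oracle home before and after the exercise (run as the Oracle software owner with ORACLE_HOME set):

$ORACLE_HOME/OPatch/opatch lspatches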

The vulnerability is in the chipset itself, unrelated to the OS. These vulnerabilities exist at the hardware layer and provide attackers with a way to essentially read the memory used by other processes. Because of the nature of this exploit, the database itself is not currently thought to be an attack vector; in fact the real "fix" for this issue relies on fixing the architecture at the chipset level. 

To mitigate the risk without replacing your chips, OS vendors are releasing patches that fundamentally change how processes interact with memory structures. This is why, in "Addendum to the January 2018 CPU Advisory for Spectre and Meltdown (Doc ID 2347948.1)", we see Oracle releasing patches for Oracle VM (virtual machines are particularly susceptible to this exploit, as one VM can read the memory of processes in another, making it particularly deadly for cloud computing) and Oracle Enterprise Linux. We understand that Oracle is exploring the possibility that additional patches may be needed for Oracle Enterprise and Standard Edition databases themselves.


For Exadata only, you need to apply the latest Exadata 12.2.1.1.6 software bundle (the full version number is 12.2.1.1.6.180125.1). The Spectre/Meltdown patches are included in it.


The best course of action would be to get word from Oracle Support on any database-related patch. 

Categories: DBA Blogs

Move a Datafile from one ASM Diskgroup to Another Diskgroup

Thu, 2018-03-29 20:49
Following are the steps to move a datafile from one ASM diskgroup to another diskgroup in the same ASM instance:

For this example, let's suppose the full path of the datafile to be moved is +DATA/test/datafile/test.22.121357823 and the datafile number is 11.
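
If you want to double-check the file number and current path first, a quick query helps (the file number 11 comes from this example):

sqlplus / as sysdba
SQL> select file_id, file_name from dba_data_files where file_id = 11;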

Step 1: From RMAN, put datafile 11 offline:

SQL "ALTER DATABASE DATAFILE ''+DATA/test/datafile/test.22.121357823'' OFFLINE";

Step 2: Backup Datafile 11 to Copy using RMAN:

$ rman target /
BACKUP AS COPY DATAFILE 11 FORMAT '+DATA_NEW';

--- Make note of the path and name of the generated datafile copy.

Step 3: From RMAN, switch datafile 11 to copy:

SWITCH DATAFILE "+DATA/test/datafile/test.22.121357823" TO COPY;

Step 4: From RMAN, Recover Datafile 11:

RECOVER DATAFILE 11;

Step 5: From RMAN, bring datafile 11 back online:

SQL "ALTER DATABASE DATAFILE 11 ONLINE";

Step 6: From SQL*Plus, verify that datafile 11 was correctly switched and is online:

sqlplus / as sysdba
SQL> select file_id,file_name,online_status from dba_data_files where file_id in (11);
Categories: DBA Blogs

It's the Cloud Service, Not the Oracle 18c RDBMS, Which is Self-Driving and Autonomous

Thu, 2018-01-04 20:08
Look at the following picture displayed on Oracle's official website here, and you would forgive anyone who naively believes that Oracle 18c is a self-driving, autonomous database.

Now, in the above-mentioned article, after reading the title and the first paragraph, one still maintains the notion that Oracle 18c is an autonomous, self-driving database. It's only after reading the second paragraph carefully that one understands the true picture.

The second paragraph in that article clearly says that what is self-driving and autonomous is the Oracle Autonomous Database Cloud, powered by the Oracle 18c database, and not Oracle Database 18c itself.

So this autonomy is about the cloud service and not about the RDBMS. This cloud service could very well run on Oracle 12c, or any other version for that matter, and claim the same. The DBA's role in such managed database cloud services is still nominal. If it's not in the cloud, though, the DBA's role is more challenging than before, as every new version is packed with new features.
Categories: DBA Blogs

ORA-00240: control file enqueue held for more than 120 seconds 12c

Mon, 2017-12-11 22:30
Occasionally, in an Oracle 12c database, you may get the ORA-00240: control file enqueue held for more than 120 seconds error in the alert log file.


Well, as this error mentions the control file it looks scary, but if you are getting it infrequently and the instance stays up and running, then there is no need to worry and it can be ignored as a fleeting glitch.

But if it starts happening too often, or in the worst case hangs or crashes the instance, then more than likely it's a bug and you need to apply either the latest PSU or a one-off patch available from Oracle Support after raising an SR.

There have been occurrences where this error happened because a high number of sessions conflicted with the operating system's ulimit, resulting in ASM and RDBMS hangs. Sometimes it could be due to shared pool latch contention, as per a few MOS documents.
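
A rough way to gauge how frequent the error is, and what OS limits the running instance actually has, is sketched below (paths and SID are illustrative; adjust for your diagnostic_dest):

# How many times has the error been logged?
grep -c "ORA-00240" /u01/app/oracle/diag/rdbms/mydb/MYDB1/trace/alert_MYDB1.log

# Effective OS limits of the running instance (via its pmon process)
cat /proc/$(pgrep -f ora_pmon_MYDB1)/limits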

So if it's rare, then ignore it and move on with life, as there are plenty of other things to worry about. If it's frequent and a show-stopper, then by all means raise a SEV-1 SR with Oracle Support as soon as possible.
Categories: DBA Blogs

Enters Amazon Aurora Serverless

Thu, 2017-11-30 23:06
More often than not, database administrators across technologies have to fight high load on their databases. It could be ad hoc queries, urgent reports, overflowing jobs, or simply a high frequency and volume of queries from end users.

DBAs try their best to do generous capacity planning to ensure optimal response time and throughput for end users. But there are various scenarios where it becomes very hard to predict demand, and storage and processing needs under unpredictable load are hard to foretell in advance.

Cloud computing offers the promise of unmatched scalability for processing and storage needs. Amazon AWS has introduced a new service which gets closer to that ultimate scalability. Amazon Aurora is a hosted relational database service by AWS. You set your instance size and storage needs while setting Aurora up. If your processing requirements change, you change your instance size, and if you need more read throughput, you add more read replicas.
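
For instance, resizing a provisioned Aurora instance is a single CLI call (the identifier and instance class below are only examples):

aws rds modify-db-instance \
    --db-instance-identifier my-aurora-instance-1 \
    --db-instance-class db.r4.xlarge \
    --apply-immediately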

But that is good for the loads we know about and can more or less predict. What about the loads which appear out of the blue? Maybe for a blogging site, where some post has suddenly gone viral and starts getting millions of views instead of hundreds? And then the traffic disappears just as suddenly as it appeared, and maybe after some days the same thing happens for another post?

In this case, if you are running Amazon Aurora, it would be fairly expensive to just increase the instance size or add read replicas in anticipation of a traffic burst. It might not come, but then again it might.

In the face of this uncertainty, enter Amazon Aurora Serverless. With Serverless Aurora, you don't select your instance size. You simply specify an endpoint, and all queries are routed to that endpoint. Behind that endpoint lies a warm proxy fleet of database capacity which can scale to your requirements within about 5 seconds.

It's all on-demand and ideal for transient, spiky loads. What's sweeter is that billing is on a per-second basis in Aurora capacity units, with a 1-minute minimum for each newly addressed resource.
Categories: DBA Blogs

Query Flat Files in S3 with Amazon Athena

Tue, 2017-11-21 21:01
Amazon Athena enables you to access data in flat files stored in S3 (Simple Storage Service) as if it were in a table in a database. And you don't have to set up any server or any other software to accomplish that.

That's another glowing example of being 'Serverless.'


So if a telecommunications company has hundreds of thousands or more call detail record (CDR) files in CSV, Apache Parquet, or any other supported format, they can just be uploaded to an S3 bucket, and then, using AWS Athena, that CDR data can be queried with well-known ANSI SQL.
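
As a sketch, the DDL and query could look like this (the bucket name and columns are made up; Athena uses Hive-style DDL for external tables):

CREATE EXTERNAL TABLE cdr (
  caller       string,
  callee       string,
  call_start   timestamp,
  duration_sec int
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://my-telco-bucket/cdr/';

SELECT caller, SUM(duration_sec) AS total_seconds
FROM cdr
GROUP BY caller
ORDER BY total_seconds DESC
LIMIT 10;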

Ease of use, performance, and cost savings are a few of the benefits of the AWS Athena service. True to the cloud promise, with Athena you are charged for what you actually do, i.e. you are only charged for the queries, at $5 per terabyte scanned. Beyond S3 there are no additional storage costs.

So if you have a huge amount of formatted data in files and all you want to do is query it using familiar ANSI SQL, then AWS Athena is the way to go. Be aware that Athena is not for enterprise reporting and business intelligence; for that purpose there is AWS Redshift. Athena is also not for running highly distributed processing frameworks such as Hadoop; for that there is AWS EMR. Athena is more suitable for running interactive queries on your supported formatted data in S3.

Remember to keep reading the AWS Athena documentation as it will keep improving, lifting limitations, and changing like everything else in the cloud.
Categories: DBA Blogs

List of Networking Concepts to Pass AWS Cloud Architect Associate Exam

Wed, 2017-11-08 16:31
Networking is a pivotal concept in cloud computing, and knowing it is a must to be a successful cloud architect. Of course you won't be physically peeling cables to crimp RJ45 connectors, but you must know the various facets of logical networking.


You never know exactly what is going to be in the exam, but that's what exams are all about. In order to prepare for the AWS Cloud Architect Associate exam, you must thoroughly read and understand the following from the AWS documentation:


Before you read the above, it would be very beneficial if you also go and learn the following networking concepts:

  • LAN
  • WAN
  • IP addressing
  • Difference between IPV4 and IPV6
  • CIDR
  • SUBNET
  • VPN
  • NAT
  • DNS
  • OSI Layers
  • TCP
  • UDP
  • ICMP
  • Router, Switch
  • HTTP
  • NACL
  • Internet Gateway
  • Virtual Private Gateway
  • Caching, Latency
  • Networking commands like route, netstat, ping, tracert, etc. (examples below)
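The last bullet is easy to try out on any Linux host, for example:

ping -c 3 8.8.8.8       # reachability and round-trip latency
netstat -rn             # kernel routing table (or: ip route)
traceroute 8.8.8.8      # hop-by-hop path; tracert on Windows
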
Feel free to add any other network concept in comments which I might have missed.
Categories: DBA Blogs

Guaranteed Way to Pass AWS Cloud Architect Certification Exam

Tue, 2017-11-07 06:00
Today, and for some time to come, one of the hottest IT certifications to hold is the AWS Cloud Architect certification. There are various reasons for that:



  • If you pass it, it really means you know the stuff properly
  • AWS is the cloud platform of choice the world over, and it's not going anywhere
  • There is literally a mad rush out there as companies scramble to shift or extend their infrastructure to the cloud to stay relevant and to cut costs.
  • There is a huge shortage of professionals with theoretical and hands-on knowledge of the cloud, and this shortage is growing alarmingly.
So it's not surprising that sysadmins, developers, DBAs and other IT professionals are yearning to earn cloud credentials, and there is no better way to do that than getting AWS certified.

So is there any guaranteed way to pass the AWS Cloud Architect certification exam?

I say Yes and here is the way:

Read the AWS documentation about the following AWS services. Read about these services, then read them again and again, and then again. Learn them like you know your own name. Get a free account and play with these services. When you feel comfortable enough with them and can explain them to anyone inside out, then go ahead, sit the exam, and you will pass it for sure. So read and learn all the services under these sections:


  • Compute
  • Storage
  • Database 
  • Network & Content Delivery
  • Messaging
  • Identity and Access Management
Also make sure to read the FAQs of all the above services. Also read and remember what AWS Kinesis, WAF, Data Pipeline, EMR, and WorkSpaces are. No details are necessary for these; just know what they stand for and what they do.

Best of Luck.
Categories: DBA Blogs

Passed the AWS Certified Solutions Architect - Associate Exam

Tue, 2017-11-07 05:11
Well, it was quite an enriching experience to go through the AWS certification exam, and I am humbled to say that I passed it. It was the first time I took any AWS exam, and I must say the quality was high; it was challenging and interesting enough. 

I will be writing soon about how I prepared and my tips for passing this exam.

Good night for now.
Categories: DBA Blogs

CIDR for Dummies DBA in Cloud

Sun, 2017-10-01 02:00
For DBAs in the cloud, it's imperative to learn various networking concepts, and CIDR is one of them. Without going into much detail, I will just post a quick note here on what CIDR is and how to use it.



A CIDR looks something like this:

10.0.0.0/28

The 10.0.0.0/28 represents a range of IP addresses, and no, it is NOT from 10.0.0.0 to 10.0.0.28. Here is what it is:

So in order to know how many IP addresses are in that range, and where it starts and where it ends, the formula is:

2 ^ (32 - prefix length)

So for the CIDR 10.0.0.0/28 :

2 ^ (32 - 28) = 2 ^ 4 = 2 * 2 * 2 * 2 = 16

So in the CIDR range 10.0.0.0/28, we have 16 IP addresses, in which

Start IP = 10.0.0.0
End IP  = 10.0.0.15
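
You can sanity-check the formula from any shell; for a /28 prefix:

prefix=28
echo $(( 2 ** (32 - prefix) ))    # prints 16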



Also, cloud providers normally reserve a few IPs out of the CIDR range for different services like DNS, NAT, etc. For example, AWS reserves the first 4 and the last IP of any subnet's CIDR range. So in our example, out of 16 addresses we would have only 11 left to actually work with in AWS.

So in the case of AWS, we would have a region in which we would have a VPC. The CIDR is assigned to that VPC. In that VPC, for example, we could have 2 subnets. We can distribute the usable IPs from our CIDR 10.0.0.0/28 across both subnets; below I am giving 5 IPs to each subnet. A subnet is just a logically separate network.

For example we can give:

Subnet 1:

10.0.0.5 to 10.0.0.9

Subnet 2:

10.0.0.10 to 10.0.0.14 

Hope that helps.

P.S. And oh, CIDR stands for Classless Inter-Domain Routing (or supernetting).
Categories: DBA Blogs

Idempotent and Nullipotent in Cloud

Tue, 2017-09-19 04:50
I was going through the documentation of Oracle Cloud IaaS when I came across the vaguely familiar term idempotent.



One great thing I have felt very strongly with all this cloud-mania is the recall of various theoretical computing concepts which we learned or read in university courses way back. From networking through web concepts to operating systems, there is a plethora of concepts coming back into active, everyday practice for cloud professionals.

Two such mouthful words are idempotent and nullipotent. These are types of actions, and the difference between an idempotent and a nullipotent action is the result they return when performed.

In simple terms;

    When executed, an idempotent action produces a result the first time, and that result remains the same no matter how many times the action is repeated after the first time.
  
    A nullipotent action always produces the same result whether it is executed several times or not executed at all; it has no effect on the state of the resource.
 
So in terms of the cloud, where REST (Representational State Transfer) APIs and HTTP (Hypertext Transfer Protocol) are the norm, these two concepts of idempotent and nullipotent are very important. In order to manage resources in the cloud (through URIs), there are various HTTP actions which can be performed. Some of these actions are idempotent and some are nullipotent.

For example, the GET action of HTTP is nullipotent: no matter how many times you execute it, it doesn't affect the state of the resource and returns the same result. PUT, on the other hand, is an idempotent HTTP action: it changes the state of the resource the first time it's executed, and all subsequent executions of the same PUT leave the resource as it was after the first.
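
A hypothetical REST endpoint makes the difference concrete (the URL and payload below are made up for illustration):

# GET is nullipotent: repeating it never changes the resource
curl -X GET https://api.example.com/v1/instances/42

# PUT is idempotent: sending the same representation twice leaves the
# resource in the same state as sending it once
curl -X PUT -H "Content-Type: application/json" \
     -d '{"shape": "VM.Standard2.1"}' \
     https://api.example.com/v1/instances/42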
Categories: DBA Blogs

SRVCTL Status Doesn't Show RAC instances Running Unlike SQLPLUS

Mon, 2017-09-18 18:34
Yesterday, I converted a single instance 12.1.0.2.0 physical standby database to a cluster database with 2 nodes.

After converting it to a RAC database, I brought both instances up in mount state on both nodes, and they came up fine. I started managed recovery on one node, and it worked perfectly and got in sync with the primary.


Then I added the database and instances as cluster resources with srvctl like this:

$ srvctl add database -d mystb -o /d01/app/oracle/product/12.1.0.2/db_1 -r PHYSICAL_STANDBY -s MOUNT
$ srvctl add instance -d mystb -i mystb1 -n node1
$ srvctl add instance -d mystb -i mystb2 -n node2

But srvctl status didn't show it running:

$ srvctl status database -d mystb -v
Instance mystb1 is not running on node node1
Instance mystb2 is not running on node node2

While from SQLPLUS, I could see both instances mounted:

SQL> select instance_name,status,host_name from gv$instance;

INSTANCE_NAME    STATUS       HOST_NAME
---------------- ------------ ----------------
mystb1           MOUNTED      node1
mystb2           MOUNTED      node2

So I needed to start the database with srvctl (even though it was already started and mounted), just to please srvctl:

So I ran this:

$ srvctl start database -d mystb

The command didn't do anything except change the status of the resource in the cluster. After running the above, it worked:


$ srvctl status database -d mystb -v
 Instance mystb1 is running on node node1
 Instance mystb2 is running on node node2
Categories: DBA Blogs

Attended Google Cloud Summit in Sydney

Wed, 2017-09-13 00:50
The day event at the picturesque Pier One Autograph Collection, just under the shadow of Sydney's iconic Harbour Bridge, was very interesting, to say the least.


Key points from the event:

  • Google is investing heavily in the APAC region for cloud
  • The Sydney region for Google Cloud Platform is up and running.
  • In 3 or 4 years, it will be all about containers.
  • Machine learning is a big thing and is finally here in the true sense.
  • Also lots of tips and advice for the partners
  • Training and security are top concerns for customers moving to the cloud
  • Companies have simply no reason to manage their own data centers when the cloud is here.
Machine learning is terrific, especially in this demo by Google where DeepMind teaches itself to walk.
Categories: DBA Blogs

SPX86-8002-VP - The /var/log filesystem has exceeded the filesystem capacity limit.

Wed, 2017-09-13 00:38
The following error message sounds ominous:

SPX86-8002-VP - The /var/log filesystem has exceeded the filesystem capacity limit.

and from Cloud control:

A processor component is suspected of causing a fault with 100% certainty. Component Name: /SYS/SP. Fault class: fault.chassis.device.misconfig

But in fact, most of the time it's not as bad as it sounds.

More often than not, rebooting the ILOM does the trick and the error goes away.

Just go to /SP in the ILOM and reset it. The next ILOM snapshot, which takes some time, will clear it away.
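
For reference, from the ILOM command line the reset is a one-liner (the -> below is the ILOM prompt):

-> reset /SP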
Categories: DBA Blogs

Presented at CLOUG OTN Day 2017, Chile stop of the 2017 LAD OTN Tour

Sun, 2017-07-30 20:37
Amidst lots of empanadas and lomo saltado, I presented last week at CLOUG OTN Day 2017, the Chile stop of the 2017 LAD OTN Tour, and it was great to see a very passionate audience.

Despite the long flight and the big time zone difference, Santiago, Chile came across as very welcoming and lively. The event was very well organized and studded with international speakers, including fellow Pythianite Bjoern Rost and various other well-known speakers like Markus Michalewicz, Ricardo Gonzalez, Craig Shallahamer, and so on.

Categories: DBA Blogs

Oracle Cloud Machine ; Your Own Cloud Under Your Own Control

Fri, 2017-07-07 04:12
Yes, every company wants to be in the cloud, but not everyone wants that cloud to be out there in the wild, no matter how secure it is. Some want their cloud trapped within their own premises, under their own control.

Enter Oracle Cloud Machine.

Some of the key reasons why this makes sense are sovereignty, residency, compliance, and other business requirements. Moreover, the cloud benefits would still be there, like turnkey solutions and the same IaaS and PaaS environments for development, test, and production.

Cost might be a factor here for some organizations, so a hybrid solution might be the way to go for the majority of corporations. Having a private cloud machine alongside public cloud systems would be the answer for many. One advantage here would be that the integration of this private cloud with the public one would be streamlined.
Categories: DBA Blogs

Oracle GoldenGate Cloud Service

Wed, 2017-06-28 18:37
Even on Amazon AWS, for the migration of Oracle databases from on-prem to the cloud, my tool of choice is GoldenGate. The general steps I took for such a migration were to create an extract on the on-prem source, which sent data to a replicat running on an EC2 server in the AWS cloud, which in turn applied the data to the cloud database in RDS.

I was intrigued to see this new product from Oracle: the Oracle GoldenGate Cloud Service (GGCS).

So in GGCS, we have the extract, extract trail, and data pump running on-prem, sending data to a Replication VM node in the Oracle Cloud. This Replication VM node has a process called the Collector, which collects incoming data from on-prem. The Collector then writes this data to a trail file, from which the data is consumed by a Replicat process and applied to the cloud database.
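
To give a feel for the on-prem side, here is a bare-bones sketch of an extract and a data pump parameter file (all names, the schema, and the host are hypothetical):

-- Local extract capturing changes for the HR schema
EXTRACT ext1
USERID ogg_user, PASSWORD ********
EXTTRAIL ./dirdat/lt
TABLE hr.*;

-- Data pump pushing the local trail to the GGCS replication node
EXTRACT pmp1
RMTHOST ggcs-node.example.com, MGRPORT 7809
RMTTRAIL ./dirdat/rt
PASSTHRU
TABLE hr.*;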

This product looks good as it leverages existing, robust technologies, and it should become the default way to migrate or replicate data between Oracle databases on-prem and in the cloud.
Categories: DBA Blogs
