Feed aggregator

New Oracle Banking APIs Help Banks Build Faster, Better Services

Oracle Press Releases - Tue, 2018-04-10 07:00
Press Release
New Oracle Banking APIs Help Banks Build Faster, Better Services
Packaged API solution provides more than 1,500 ready, functional APIs for payments, retail and corporate banking

Oracle Industry Connect—New York, NY—Apr 10, 2018

Oracle Financial Services today announced the general availability of a new solution, Oracle Banking APIs, targeted at banks embarking on an Open Banking journey. Banks can take advantage of the ready-to-consume APIs to accelerate their initiatives and tap the new opportunities presented by open banking and regulations such as PSD2. Oracle Banking APIs enables banks to build seamless partnerships with third-party technology organizations, easily integrate with corporate client applications and reduce the time between API ideation and delivery.

Banks today are faced with the daunting task of identifying underlying systems, designing customized APIs and exposing them for consumption. This process can take between 12 and 24 months, depending on the complexity of the bank’s IT landscape. Oracle Banking APIs reduces this effort by providing banks a prebuilt repository of more than 1,500 REST APIs, matured through Oracle’s experience of serving 600+ banks across 140 countries over two decades.

“The early mover advantage is critical to success for banks in the open banking era,” said Chet Kamat, Senior Vice President, Oracle Financial Services. “Oracle Banking APIs is designed to accelerate that journey and help banks gain a competitive advantage in the market.”

“At Weatherbys Bank, we believe in providing personalised service, considering every customer a unique individual. Oracle Banking APIs will enable us to provide innovative tailored services, while managing customer consent, identity and security,” said Suzie Batten, Chief Technology Officer, Weatherbys Bank. “With Oracle Banking APIs, Weatherbys can comply with PSD2 and Open Banking rules in the UK. We look forward to simplifying our customers’ financial transactions and enhancing our value in making banking an effortless experience.”

Banks that use Oracle Banking APIs can take advantage of pre-integration with existing Oracle core banking systems, Oracle Banking Digital Experience, Oracle Identity Cloud Service and Oracle API Platform Cloud Service. The solution is also capable of working with any core banking or online banking, identity management, or API gateway solution.

Key features of Oracle Banking APIs:

  • More than 1,500 APIs spanning payments, retail and corporate banking, including specialized functionality such as multiproduct origination, retail customer financial insights, bulk payments, and trade finance

  • Dynamic, customizable APIs that understand business logic, with intelligent pre- and post-processing capabilities

  • Centralized change management for faster time to market

  • Ease of integration with any existing banking solutions

Oracle Financial Services Hackathon enables Fintechs on Oracle Banking APIs

As part of its Open Banking initiative, Oracle Financial Services teamed with the Oracle Cloud Startup Accelerator (OCSA) and Oracle Scaleup Ecosystem to host its first Fintech-focused Hackathon. Thirteen Fintech startups were carefully selected to participate and address critical domain areas. They were provided an API sandbox of Oracle Banking APIs to develop use cases. The Fintech participants cover areas such as identity verification, AI, payments, credit scoring, personal financial management, account receivables and marketplaces.

The global initiative provides Fintechs access to Oracle’s ecosystem of banking clients across 140+ countries and gives banks a set of Fintechs that are ready to integrate into their landscape and solve real-world problems, all powered by Oracle Banking APIs. Fintechs from the Hackathon are presenting at Oracle Industry Connect.

The Fintech participants include: Signzy, Active AI, Teknospire, Zwift Pay, KapitalWise, Bnesis, Qalize, Happay, Statanalytics, Lifesaver, Numberz, Raisin and Unscrambl.

Contact Info
Judi Palmer
Oracle Corporation
+1.650.784.7901
judi.palmer@oracle.com
Brian Pitts
Hill+Knowlton Strategies
+1 312 475 5921
brian.pitts@hkstrategies.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Oracle Industry Connect

For more information about how Oracle is committed to empowering organizations through best-in-class, industry-specific business solutions, visit oracle.com/industries. To learn more about Oracle Industry Connect 2018, go to oracle.com/oracleindustryconnect.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Judi Palmer

  • +1.650.784.7901

Brian Pitts

  • +1 312 475 5921

Oracle Textura Payment Management Surpasses $500 Billion in Construction Value Managed on System

Oracle Press Releases - Tue, 2018-04-10 07:00
Press Release
Oracle Textura Payment Management Surpasses $500 Billion in Construction Value Managed on System
Owners, Developers and Contractors Increase Efficiency and Control with Cloud Solution for Subcontractor Payment Management

Oracle Industry Connect—New York, NY—Apr 10, 2018

Oracle Construction and Engineering today announced that, since its inception, its Oracle Textura Payment Management Cloud Service has been used to manage subcontractor payments on projects representing more than $500 billion in construction value.

By streamlining, automating, and standardizing payment management activities—including invoicing, compliance management, approvals, lien waiver collection, and disbursement—Oracle Textura Payment Management helps improve payment outcomes and enables organizations to scale operations for growth. General contractors, subcontractors, and project owners/developers can all benefit from increased productivity, reduced risk, and improved communication across stakeholders from the application, which was launched in 2006.

“Our customers rely on Oracle Textura Payment Management to improve efficiency, enhance visibility, mitigate risk, and improve cash flow,” said Mike Sicilia, senior vice president and general manager, Oracle Construction and Engineering. “Reaching this milestone is a testament to the value our application brings to the industry.”

The $500 billion in construction value represents a significant number of projects, documents, and payments that Oracle Textura Payment Management has been used to manage since its launch:

  • 43,000+ projects
  • Nearly 10 million documents created and electronically signed
  • $6.2 billion worth of subcontractor payments managed on a monthly basis
Contact Info
Judi Palmer
Oracle Corporation
+1.650.784.7901
judi.palmer@oracle.com
Kristin Reeves
Blanc & Otus
+1.925.787.6744
kristin.reeves@blancandotus.com
About Oracle Construction and Engineering

Oracle Construction and Engineering delivers best-in-class project management solutions that empower organizations to proactively manage projects, gain complete visibility, improve collaboration, and manage change. Its cloud-based solutions for global project planning and execution help improve strategy execution, operations, and financial performance. For more information, please visit www.oracle.com/construction-and-engineering.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Oracle Industry Connect

For more information about how Oracle is committed to empowering organizations through best-in-class, industry-specific business solutions, visit oracle.com/industries. To learn more about Oracle Industry Connect 2018, go to oracle.com/oracleindustryconnect.

Trademark

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1.650.784.7901

Kristin Reeves

  • +1.925.787.6744

New Oracle Health Sciences mHealth Connector Cloud Service Enables Digital Clinical Trials at Scale and Delivers New Level of Patient Centricity

Oracle Press Releases - Tue, 2018-04-10 07:00
Press Release
New Oracle Health Sciences mHealth Connector Cloud Service Enables Digital Clinical Trials at Scale and Delivers New Level of Patient Centricity

Oracle Industry Connect—New York, NY—Apr 10, 2018

Oracle Health Sciences today unveiled its new Oracle Health Sciences mHealth Connector Cloud Service, enabling clinical study teams to remotely collect e-Source data from patient sensors, wearables and apps for use in their clinical trials, while delivering a new level of patient engagement and centricity.

The rise of mobile health technologies, including mobile sensors, patient engagement apps and telemedicine, is reshaping how drugs are developed by improving the efficiency of clinical trials. Oracle’s mHealth Connector Cloud Service makes it easy to connect existing clinical systems with a wide variety of e-Sources, enabling therapeutic teams to obtain richer, more accurate patient data, improve adherence to study protocols, better understand the safety and efficacy of trial drugs and improve patient centricity with remote patient monitoring.

“We are thrilled to announce our new mHealth Connector Cloud Service, as it holds great promise in speeding clinical trials and bringing more drugs to market faster. What used to be patient-recorded data and outcomes captured via paper forms and site visits can now be collected via mobile health sensors and wearables, which have the potential to shorten trial times and reduce costs while allowing sick patients to remain in the comfort of their homes rather than traveling to and from trial sites. To improve patient enrollment in clinical trials, study teams must put the patient at the center of everything they do, and emerging technologies such as wearables and sensors hold the key,” said Steve Rosenberg, general manager, Oracle Health Sciences.

The mHealth Connector Cloud Service supports a number of integration approaches and Oracle Health Sciences is currently exploring integration efforts with a wide ecosystem of mobile health companies such as Validic, MC10 and CMT as well as solution integrators and developers such as Accenture and POSSIBLE Mobile.

“Accenture is committed to advancing new approaches in clinical research through collaborations with our clients and Oracle Health Sciences. Together we’re developing ways to transform clinical trial processes by creatively applying digital capabilities,” said Kevin Julian, senior managing director, Accenture Life Sciences North America. “We believe mHealth solutions will allow faster and easier integration of a wide range of devices and sensors in real time, streamlining data collection and enhancing the patient experience.”

“At CMT, our CleverCap product family blends the best technology with the connected patient to help track and improve medication dosing habits in clinical trials. Our collaboration with Oracle Health Sciences enables a seamless real-time display of dosing patterns data into the eClinical systems that clinical trial sites and clinical teams utilize, alongside other essential clinical trial data,” said Moses Zonana, CEO of CMT. 

“Our business is focused on gathering complex physiological data directly from our wearable sensors worn by research subjects. We’re excited to collaborate with Oracle Health Sciences, seamlessly flow our data directly into Oracle’s clinical trial applications, and be part of the new paradigm of digital clinical trials,” said Scott Pomerantz, CEO and President of MC10.

“Using the Oracle mHealth Connector Cloud Service, we were able to easily transfer patient data from our Apple ResearchKit apps to the clinical trial cloud application. The integration process was straightforward and painless,” said Jay Graves, CTO of POSSIBLE Mobile.

“Designing a clinical trial to better engage participants requires utilization of new data sources. By automating the passive collection of data via digital health devices and apps, researchers are able to access more accurate, diverse, and objective data—enabling sponsors to better manage participant engagement and program adherence. Validic joins Oracle in a collaboration to merge vital data sources and novel endpoints with a solution that enables true patient-centricity and efficiency in clinical trials,” said Drew Schiller, CEO, Validic.

Contact Info
Valerie Beaudett
Oracle
6504007833
valerie.beaudett@oracle.com
Phebe Shi
Burson Marsteller
4152163067
phebe.shi@bm.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes to bring new drugs to market faster. As the number one vendor in Life Sciences (IDC, 2017), the number one provider of eClinical solutions (Everest Group, 2017) and powered by the number one data management technology in the world (Gartner, 2018), Oracle Health Sciences technology is trusted by 29 of the top 30 pharma, 10 of the top 10 biotech and 10 of the top 10 CROs for clinical trial and safety management around the globe.

Oracle Industry Connect

For more information about how Oracle is committed to empowering organizations through best-in-class, industry-specific business solutions, visit oracle.com/industries. To learn more about Oracle Industry Connect 2018, go to oracle.com/oracleindustryconnect.

Trademark

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Valerie Beaudett

  • 6504007833

Phebe Shi

  • 4152163067

Oracle Utilities Achieves $2 Billion in Energy Cost Savings for Utilities’ Customers

Oracle Press Releases - Tue, 2018-04-10 07:00
Press Release
Oracle Utilities Achieves $2 Billion in Energy Cost Savings for Utilities’ Customers
National Grid helps Opower Energy Efficiency programs hit milestones in both customer savings and customer care

Oracle Industry Connect—New York, NY—Apr 10, 2018

Oracle Utilities today announced that its Opower Energy Efficiency programs have generated a total of $2 billion in utility bill savings for customers over the past decade, along with making great strides in customer engagement.

Launched in 2008, Opower Energy Efficiency programs have been implemented at more than 100 electric and gas utilities around the globe, motivating customers to save more than 17 TWh of energy through multi-channel, personalized communications. These energy savings could cool 8.5 million homes, light over 17 million homes or power 1.2 million homes for a year.

This milestone is a shared accomplishment among many utilities, some of which have run Opower Energy Efficiency programs for the better part of the decade. One partner, National Grid, has played an integral role, contributing more than 1 TWh of electricity savings and 45 million therms of gas savings.

National Grid’s program started as a small implementation of Opower Home Energy Reports (HERs) in New York and is today a multi-state engagement that touches 2.6 million customers in New York, Massachusetts and Rhode Island. In 2017 alone, National Grid customers received over 24 million personalized reports and 143,000 high bill alerts. The states where National Grid operates (New York, Massachusetts and Rhode Island) consistently rank in the top five states nationwide for energy efficiency.

“National Grid has leveraged a range of Opower solutions, including the Energy Efficiency program, to transform their relationship with customers and correspondingly improve their business overall. In a highly competitive market, customer satisfaction is paramount for success, and it all starts with personalized customer engagement. We applaud National Grid for their commitment to providing the best possible experience for their customers and are gratified by the integral role that they have played in our savings success story,” said Rodger Smith, Senior Vice President and General Manager of Oracle Utilities.

In addition to driving savings, National Grid’s HERs have also improved customer sentiment. Findings from a recent telephone survey show that 81% of National Grid customers actively read the reports, that report recipients score National Grid 7-10% higher across a variety of key customer sentiment metrics, and that recipients show higher familiarity with National Grid energy efficiency programs.

“National Grid deployed our first Opower Energy Efficiency program back in 2009 to help customers better understand and manage their energy use,” said John Isberg, Vice President of Customer Solutions, National Grid. “Since then, we have seen significant energy savings and a positive impact on our customer sentiment as they look for greater control over their energy spend. This program is essential to our future success in navigating the rapidly changing energy landscape, especially as customer expectations continue to shift.”

Over the past several years, National Grid has augmented their energy efficiency program with several other Opower solutions to provide a better overall experience to their customers.  These include Digital Self Service Energy Management web tools, which give customers personalized energy insights and recommendations, and segmented campaigns to improve participation in programs such as income assistance.

Additionally, in response to the New York Reforming the Energy Vision (REV) initiative, National Grid deployed an innovative Oracle Utilities Opower Peak Management program in their Clifton Park territory. This program, called Peak Time Rewards, encourages customers to reduce peak electric load and overall electric and gas consumption with reward points that can be exchanged for gift cards. This unique program enables National Grid to offer a price signal to their customers without making changes to their billing system.

Contact Info
Valerie Beaudett
Oracle
6504007833
valerie.beaudett@oracle.com
Bronwyn Wallace
Hill + Knowlton Strategies
7137243627
bronwyn.wallace@hkstrategies.com
About National Grid

National Grid (LSE: NG; NYSE: NGG) is an electricity, natural gas, and clean energy delivery company that supplies the energy for more than 20 million people through its networks in New York, Massachusetts, and Rhode Island. It is the largest distributor of natural gas in the Northeast. National Grid also operates the systems that deliver gas and electricity across Great Britain. National Grid is transforming its electricity and natural gas networks to support the 21st century digital economy with smarter, cleaner, and more resilient energy solutions. Read more about the innovative projects happening across our footprint in The Democratization of Energy, an eBook written by National Grid’s US president, Dean Seavers. For more information, please visit our website. You can also follow us on Twitter, watch us on YouTube, friend us on Facebook, and find our photos on Instagram.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Oracle Industry Connect

For more information about how Oracle is committed to empowering organizations through best-in-class, industry-specific business solutions, visit oracle.com/industries. To learn more about Oracle Industry Connect 2018, go to oracle.com/oracleindustryconnect.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Valerie Beaudett

  • 6504007833

Bronwyn Wallace

  • 7137243627

Oracle Retail and Adyen Deliver Unified and Global Retail Consumer Payments

Oracle Press Releases - Tue, 2018-04-10 07:00
Press Release
Oracle Retail and Adyen Deliver Unified and Global Retail Consumer Payments
Adyen Becomes Oracle PartnerNetwork Gold Level Partner Extending Value of POS and Omnichannel Investments

Oracle Industry Connect—New York, NY—Apr 10, 2018

Oracle today announced that it has awarded Adyen, the payment platform of choice for the world’s leading companies, Gold level membership in Oracle PartnerNetwork (OPN). With this Gold level membership, Oracle recognizes Adyen’s ability to deliver complementary and unified payment gateways for Oracle Retail Xstore solutions. Through this relationship, Oracle and Adyen now offer a best-in-class, global consumer payments solution to their retail customers.

“In our global consumer research Retail in 4 Dimensions, we discovered that the global consumer has rising expectations for fast, smarter payment options. In fact, 57% of global consumers want instant one-click checkout online and 60% want mobile payment options in-store,” said Ray Carlin, Senior Vice President and General Manager of Oracle Retail. “Adyen delivers that customer experience and we are pleased to extend the Gold level member status after having successfully implemented Oracle and Adyen at multiple global brands.”

The integration between Adyen and Oracle Retail is a great example of how Oracle is delivering additional value for retail customers through integrations that extend the value of POS and Omnichannel investments. In addition to providing multiple point-of-service hardware offerings, Oracle Retail offers a fully integrated portfolio of hardware and software solutions that enable retailers to streamline managerial tasks, increase speed of service and elevate the consumer experience.

“Successful retailers should focus on delivering great customer experiences across all channels. With this partnership, retailers will be better equipped to meet rising shopper expectations wherever and however they want to pay,” said Roelant Prins, Chief Commercial Officer, Adyen. “The partnership enables a seamless end-to-end consumer experience anywhere in the world.”

On April 10-11, Oracle will convene a global community of retail leadership at Oracle Industry Connect in New York, NY to discuss adapting to market changes, simplifying operations and empowering authentic brand experiences. At this event, Oracle and Adyen will present how this relationship will benefit retail merchants worldwide.

Available Demonstrations:

  • Promotions and Entitlements Across Channels: Oracle Retail enables a single view of the customer that can increase engagement in a loyalty scheme and enable effective marketing campaigns. In this demonstration, you can expect to see how our solution can be used to reward and delight with highly targeted promotions and loyalty awards that span channels. The demonstration is scheduled to start with a customer’s journey online where a purchase triggers a loyalty award that is delivered directly to their mobile device and concludes with an in-store experience where the customer can use their device to apply the award to an in-store transaction with Adyen payments platform. The demonstration features capabilities from Oracle Commerce Cloud, Oracle Marketing Cloud, Oracle Retail Customer Engagement, Oracle Retail Xstore and Adyen payments platform.
  • Innovation on the Cloud - Retail.com Chatbot: Understand how Oracle Retail is embedding artificial intelligence and machine learning into the customer journey in this innovative demonstration. You can expect to see a customer being re-targeted after abandoning a basket on-line and invited to a Facebook Messenger chat session with a chatbot developed using Oracle Mobile Cloud Enterprise with intelligent bots. The customer is rewarded with loyalty points for joining the chat, can understand further product and order details from the chatbot and finally can place an order with Adyen tokenization. Returning shoppers can pay securely with a tap of their finger in the chat session. This demonstration features capabilities from Oracle Commerce Cloud, Oracle Mobile Cloud Enterprise, Oracle Retail Customer Engagement, Oracle Retail Order Broker, Oracle Retail Order Management and Adyen tokenization.
 

To learn more about Oracle Industry Connect 2018 and register to attend visit: www.oracle.com/oracleindustryconnect/

For more information, visit our website at https://www.adyen.com/

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Adyen

Adyen is the payments platform of choice for the world’s leading companies. The only provider of a modern end-to-end infrastructure connecting directly to Visa, Mastercard, and consumers’ globally preferred payment methods, Adyen delivers frictionless payments across online, mobile, and in store. With offices around the world, Adyen serves 8 of the 10 largest US Internet companies and many worldwide retailers. Customers include Facebook, Uber, L’Oreal, Casper, Bonobos, Netflix, and Spotify.

For more information, visit our website at https://www.adyen.com/

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle’s partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program – an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners

Oracle Industry Connect

For more information about how Oracle is committed to empowering organizations through best-in-class, industry-specific business solutions, visit oracle.com/industries. To learn more about Oracle Industry Connect 2018, go to oracle.com/oracleindustryconnect.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

Deploy a Cloudera cluster with Terraform and Ansible in Azure – part 1

Yann Neuhaus - Tue, 2018-04-10 05:13

Deploying a Cloudera distribution of Hadoop automatically is very interesting in terms of time savings. Infrastructure-as-Code tools such as Ansible, Puppet, Chef, and Terraform now make it possible to provision, manage and deploy the configuration of large clusters.

In this blog post series, we will see how to deploy and install a CDH cluster with Terraform and Ansible in the Azure cloud.

The first part consists of provisioning the environment with Terraform in Azure. Terraform features providers to interact with cloud platforms such as Azure and AWS. You can find the Terraform documentation for the Azure provider here.

Desired architecture

 

[Image: Azure architecture]

Above is a representation of the desired architecture for our CDH environment: 5 nodes for a testing infrastructure, including a Cloudera Manager node, a second master node for the Hadoop Secondary NameNode, and 3 workers.

Prerequisites

Terraform must be installed on your system. https://docs.microsoft.com/en-us/azure/virtual-machines/linux/terraform-install-configure#install-terraform

Next, generate a Client ID and a Client Secret with the Azure CLI so that Terraform can authenticate to Azure.

1. Sign in to administer your Azure subscription:

[root@centos Terraform]# az login

2. Get the subscription ID and tenant ID:

[root@centos Terraform]# az account show --query "{subscriptionId:id, tenantId:tenantId}"

3. Create separate credentials for TF:

[root@centos Terraform]# az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${SUBSCRIPTION_ID}"

4. Save the following information:

  • subscription_id
  • client_id
  • client_secret
  • tenant_id
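As an alternative to typing these four values into a tfvars file, Terraform can also read them from the environment: any variable named TF_VAR_<name> is mapped to the Terraform variable <name>. A minimal sketch (the IDs below are placeholders, not real credentials):

```shell
# Sketch only: export the credentials saved above as TF_VAR_* environment
# variables so Terraform picks them up automatically (Terraform maps
# TF_VAR_<name> to variable "<name>"). The IDs below are placeholders;
# substitute the values returned by the az commands.
export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000000"
export TF_VAR_client_id="00000000-0000-0000-0000-000000000000"
export TF_VAR_client_secret="replace-with-client-secret"
export TF_VAR_tenant_id="00000000-0000-0000-0000-000000000000"

echo "Terraform will authenticate with subscription $TF_VAR_subscription_id"
```

With these exported, terraform plan and apply no longer prompt for the credential variables.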

Now we are ready to start using Terraform with AzureRM API.

Build your cluster

With Terraform, we will provision the following resources in Azure:

  • A resource group – “Cloudera-Cluster”
  • 1 virtual network – “cdh-vnet”
  • 1 network security group – “cdh-nsg”
  • 1 storage account – “dbistorage1”
  • 5 network interfaces – “instance_name_network_interface”
  • 5 public/private IPs – “cdh-pip1-4”

First, we will create a variable file containing all the variables needed, without specific values. The values are specified in the var_values.tfvars file.

variables.tf
/* Configure Azure Provider and declare all the Variables that will be used in Terraform configurations */
provider "azurerm" {
  subscription_id 	= "${var.subscription_id}"
  client_id 		= "${var.client_id}"
  client_secret 	= "${var.client_secret}"
  tenant_id 		= "${var.tenant_id}"
}

variable "subscription_id" {
  description = "Enter Subscription ID for provisioning resources in Azure"
}

variable "client_id" {
  description = "Enter Client ID for Application created in Azure AD"
}

variable "client_secret" {
  description = "Enter Client secret for Application in Azure AD"
}

variable "tenant_id" {
  description = "Enter Tenant ID / Directory ID of your Azure AD. Run Get-AzureSubscription to know your Tenant ID"
}

variable "location" {
  description = "The default Azure region for the resource provisioning"
}

variable "resource_group_name" {
  description = "Resource group name that will contain various resources"
}

variable "vnet_cidr" {
  description = "CIDR block for Virtual Network"
}

variable "subnet1_cidr" {
  description = "CIDR block for Subnet within a Virtual Network"
}

variable "subnet2_cidr" {
  description = "CIDR block for Subnet within a Virtual Network"
}

variable "vm_username_manager" {
  description = "Enter admin username to SSH into Linux VM"
}

variable "vm_username_master" {
  description = "Enter admin username to SSH into Linux VM"
}

variable "vm_username_worker1" {
  description = "Enter admin username to SSH into Linux VM"
}

variable "vm_username_worker2" {
  description = "Enter admin username to SSH into Linux VM"
}

variable "vm_username_worker3" {
  description = "Enter admin username to SSH into Linux VM"
}

variable "vm_password" {
  description = "Enter admin password to SSH into VM"
}

 

var_values.tfvars
subscription_id 	= "xxxxxxx"
client_id 		= "xxxxxxx"
client_secret 		= "xxxxxxx"
tenant_id 		= "xxxxxxx"
location 		= "YOUR-LOCATION"
resource_group_name     = "Cloudera-Cluster"
vnet_cidr 		= "192.168.0.0/16"
subnet1_cidr 		= "192.168.1.0/24"
subnet2_cidr		= "192.168.2.0/24"
vm_username_manager 		= "dbi"
vm_username_master 		= "dbi"
vm_username_worker1 		= "dbi"
vm_username_worker2 		= "dbi"
vm_username_worker3 		= "dbi"
vm_password 		= "YOUR-PASSWORD"

 

Next, we will configure the virtual network with 1 subnet for all hosts.

network.tf
resource "azurerm_resource_group" "terraform_rg" {
  name 		= "${var.resource_group_name}"
  location 	= "${var.location}"
}

resource "azurerm_virtual_network" "vnet" {
  name 			= "cdh-vnet"
  address_space 	= ["${var.vnet_cidr}"]
  location 		= "${var.location}"
  resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"

  tags {
	group = "Cloudera-Cluster"
  }
}

resource "azurerm_subnet" "subnet_1" {
  name 			= "Subnet-1"
  address_prefix 	= "${var.subnet1_cidr}"
  virtual_network_name 	= "${azurerm_virtual_network.vnet.name}"
  resource_group_name 	= "${azurerm_resource_group.terraform_rg.name}"
}

Next, we can create the storage account, with a container, in the resource group. Note that storage account names must be globally unique in Azure (3-24 lowercase letters and digits), hence “dbistorage1”.

storage.tf
resource "azurerm_storage_account" "storage" {
  name 			= "dbistorage1"
  resource_group_name 	= "${azurerm_resource_group.terraform_rg.name}"
  location 		= "${var.location}"
  account_tier    = "Standard"
  account_replication_type = "LRS"

  tags {
	group = "Cloudera-Cluster"
  }
}

resource "azurerm_storage_container" "container" {
  name 			= "vhds"
  resource_group_name 	= "${azurerm_resource_group.terraform_rg.name}"
  storage_account_name 	= "${azurerm_storage_account.storage.name}"
  container_access_type = "private"
}

The next step is to create a security group for all VMs, with 2 inbound rules allowing SSH and HTTP connections from anywhere. This is obviously not secure, but keep in mind that this is a short-lived test infrastructure.

security_group.tf
resource "azurerm_network_security_group" "nsg_cluster" {
  name 			= "cdh-nsg"
  location 		= "${var.location}"
  resource_group_name 	= "${azurerm_resource_group.terraform_rg.name}"

  security_rule {
	name 			= "AllowSSH"
	priority 		= 100
	direction 		= "Inbound"
	access 		        = "Allow"
	protocol 		= "Tcp"
	source_port_range       = "*"
    destination_port_range     	= "22"
    source_address_prefix      	= "*"
    destination_address_prefix 	= "*"
  }

  security_rule {
	name 			= "AllowHTTP"
	priority		= 200
	direction		= "Inbound"
	access 			= "Allow"
	protocol 		= "Tcp"
	source_port_range       = "*"
    destination_port_range     	= "80"
    source_address_prefix      	= "Internet"
    destination_address_prefix 	= "*"
  }

  tags {
	group = "Cloudera-Cluster"
  }
}
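To actually reach the Cloudera Manager web console once the cluster is up, an extra inbound rule can be appended inside the nsg_cluster resource above. A sketch only — the rule name and priority are illustrative; 7180 is Cloudera Manager's default web UI port:

```hcl
  security_rule {
	name 			= "AllowClouderaManager"
	priority		= 300
	direction		= "Inbound"
	access 			= "Allow"
	protocol 		= "Tcp"
	source_port_range       = "*"
    destination_port_range     	= "7180"
    source_address_prefix      	= "Internet"
    destination_address_prefix 	= "*"
  }
```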

Next, we will create the private / public IPs for our instances.

ip.tf
resource "azurerm_public_ip" "manager_pip" {
  name                         = "cdh-pip"
  location                     = "${var.location}"
  resource_group_name          = "${azurerm_resource_group.terraform_rg.name}"
  public_ip_address_allocation = "static"

  tags {
    group = "Cloudera-Cluster"
  }
}

resource "azurerm_network_interface" "public_nic" {
  name                      = "manager_network_interface"
  location                  = "${var.location}"
  resource_group_name       = "${azurerm_resource_group.terraform_rg.name}"
  network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"

  ip_configuration {
    name                          = "manager_ip"
    subnet_id                     = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.manager_pip.id}"
  }

  tags {
    group = "Cloudera-Cluster"
  }
}


resource "azurerm_public_ip" "master_pip" {
  name                         = "cdh-pip1"
  location                     = "${var.location}"
  resource_group_name          = "${azurerm_resource_group.terraform_rg.name}"
  public_ip_address_allocation = "static"

  tags {
    group = "Cloudera-Cluster"
  }
}

resource "azurerm_network_interface" "public_nic1" {
  name                      = "master_network_interface"
  location                  = "${var.location}"
  resource_group_name       = "${azurerm_resource_group.terraform_rg.name}"
  network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"

  ip_configuration {
    name                          = "master_ip"
    subnet_id                     = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.master_pip.id}"
  }

  tags {
    group = "Cloudera-Cluster"
  }
}


resource "azurerm_public_ip" "worker1_pip" {
  name                         = "cdh-pip2"
  location                     = "${var.location}"
  resource_group_name          = "${azurerm_resource_group.terraform_rg.name}"
  public_ip_address_allocation = "static"

  tags {
    group = "Cloudera-Cluster"
  }
}

resource "azurerm_network_interface" "public_nic2" {
  name                      = "worker1_network_interface"
  location                  = "${var.location}"
  resource_group_name       = "${azurerm_resource_group.terraform_rg.name}"
  network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"

  ip_configuration {
    name                          = "worker1_ip"
    subnet_id                     = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.worker1_pip.id}"
  }

  tags {
    group = "Cloudera-Cluster"
  }
}


resource "azurerm_public_ip" "worker2_pip" {
  name                         = "cdh-pip3"
  location                     = "${var.location}"
  resource_group_name          = "${azurerm_resource_group.terraform_rg.name}"
  public_ip_address_allocation = "static"

  tags {
    group = "Cloudera-Cluster"
  }
}

resource "azurerm_network_interface" "public_nic3" {
  name                      = "worker2_network_interface"
  location                  = "${var.location}"
  resource_group_name       = "${azurerm_resource_group.terraform_rg.name}"
  network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"

  ip_configuration {
    name                          = "worker2_ip"
    subnet_id                     = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.worker2_pip.id}"
  }

  tags {
    group = "Cloudera-Cluster"
  }
}


resource "azurerm_public_ip" "worker3_pip" {
  name                         = "cdh-pip4"
  location                     = "${var.location}"
  resource_group_name          = "${azurerm_resource_group.terraform_rg.name}"
  public_ip_address_allocation = "static"

  tags {
    group = "Cloudera-Cluster"
  }
}

resource "azurerm_network_interface" "public_nic4" {
  name                      = "worker3_network_interface"
  location                  = "${var.location}"
  resource_group_name       = "${azurerm_resource_group.terraform_rg.name}"
  network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"

  ip_configuration {
    name                          = "worker3_ip"
    subnet_id                     = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.worker3_pip.id}"
  }

  tags {
    group = "Cloudera-Cluster"
  }
}
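The three worker public IP / NIC pairs are nearly identical, so a later iteration could collapse them with Terraform's count meta-parameter. A sketch only, under the same provider assumptions as the rest of this post — the resource names worker_pip and worker_nic are illustrative:

```hcl
# One resource block stamps out cdh-pip2 .. cdh-pip4
resource "azurerm_public_ip" "worker_pip" {
  count                        = 3
  name                         = "cdh-pip${count.index + 2}"
  location                     = "${var.location}"
  resource_group_name          = "${azurerm_resource_group.terraform_rg.name}"
  public_ip_address_allocation = "static"

  tags {
    group = "Cloudera-Cluster"
  }
}

# And the matching worker1..worker3 network interfaces
resource "azurerm_network_interface" "worker_nic" {
  count                     = 3
  name                      = "worker${count.index + 1}_network_interface"
  location                  = "${var.location}"
  resource_group_name       = "${azurerm_resource_group.terraform_rg.name}"
  network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"

  ip_configuration {
    name                          = "worker${count.index + 1}_ip"
    subnet_id                     = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${element(azurerm_public_ip.worker_pip.*.id, count.index)}"
  }

  tags {
    group = "Cloudera-Cluster"
  }
}
```

The explicit per-worker blocks are kept in this post because they are easier to read for a first walkthrough.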

Once the network, storage and security group are configured, we can provision our VM instances with the following configuration:

  • 5 instances
  • Master and Manager VM size: Standard_DS3_v2
  • Worker VM size: Standard_DS2_v2
  • CentOS 7.3
  • 1 OS disk + 1 data disk of 100 GB

vm.tf
resource "azurerm_virtual_machine" "la_manager" {
  name                  = "cdh_manager"
  location              = "${var.location}"
  resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
  network_interface_ids = ["${azurerm_network_interface.public_nic.id}"]
  vm_size               = "Standard_DS3_v2"

#This will delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  storage_image_reference {
    publisher = "OpenLogic"
    offer     = "CentOS"
    sku       = "7.3"
    version   = "latest"

  }

  storage_os_disk {
    name          = "osdisk-1"
    vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk-1.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  # Optional data disks
    storage_data_disk {
      name          = "data"
      vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/data1.vhd"
      disk_size_gb  = "100"
      create_option = "Empty"
      lun           = 0
    }

  os_profile {
    computer_name  = "manager"
    admin_username = "${var.vm_username_manager}"
    admin_password = "${var.vm_password}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    group = "Cloudera-Cluster"
  }
}


# Master (High Availability)
resource "azurerm_virtual_machine" "la_master" {
  name                  = "cdh_master"
  location              = "${var.location}"
  resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
  network_interface_ids = ["${azurerm_network_interface.public_nic1.id}"]
  vm_size               = "Standard_DS3_v2"

#This will delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  storage_image_reference {
    publisher = "OpenLogic"
    offer     = "CentOS"
    sku       = "7.3"
    version   = "latest"

  }

  storage_os_disk {
    name          = "osdisk-2"
    vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk-2.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  # Optional data disks
    storage_data_disk {
      name          = "data"
      vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/data2.vhd"
      disk_size_gb  = "100"
      create_option = "Empty"
      lun           = 0
    }

  os_profile {
    computer_name  = "master"
    admin_username = "${var.vm_username_master}"
    admin_password = "${var.vm_password}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    group = "Cloudera-Cluster"
  }
}


# Worker 1
resource "azurerm_virtual_machine" "la_worker1" {
  name                  = "cdh_worker1"
  location              = "${var.location}"
  resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
  network_interface_ids = ["${azurerm_network_interface.public_nic2.id}"]
  vm_size               = "Standard_DS2_v2"

#This will delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  storage_image_reference {
    publisher = "OpenLogic"
    offer     = "CentOS"
    sku       = "7.3"
    version   = "latest"
  }

  storage_os_disk {
    name          = "osdisk-3"
    vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk-3.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  # Optional data disks
    storage_data_disk {
      name          = "data"
      vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/data3.vhd"
      disk_size_gb  = "100"
      create_option = "Empty"
      lun           = 0
    }

  os_profile {
    computer_name  = "worker1"
    admin_username = "${var.vm_username_worker1}"
    admin_password = "${var.vm_password}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    group = "Cloudera-Cluster"
  }
}

# Worker 2
resource "azurerm_virtual_machine" "la_worker2" {
  name                  = "cdh_worker2"
  location              = "${var.location}"
  resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
  network_interface_ids = ["${azurerm_network_interface.public_nic3.id}"]
  vm_size               = "Standard_DS2_v2"

#This will delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  storage_image_reference {
    publisher = "OpenLogic"
    offer     = "CentOS"
    sku       = "7.3"
    version   = "latest"
  }

  storage_os_disk {
    name          = "osdisk-4"
    vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk-4.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  # Optional data disks
    storage_data_disk {
      name          = "data"
      vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/data4.vhd"
      disk_size_gb  = "100"
      create_option = "Empty"
      lun           = 0
    }

  os_profile {
    computer_name  = "worker2"
    admin_username = "${var.vm_username_worker2}"
    admin_password = "${var.vm_password}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    group = "Cloudera-Cluster"
  }
}

# Worker 3
resource "azurerm_virtual_machine" "la_worker3" {
  name                  = "cdh_worker3"
  location              = "${var.location}"
  resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
  network_interface_ids = ["${azurerm_network_interface.public_nic4.id}"]
  vm_size               = "Standard_DS2_v2"

#This will delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  storage_image_reference {
    publisher = "OpenLogic"
    offer     = "CentOS"
    sku       = "7.3"
    version   = "latest"
  }

  storage_os_disk {
    name          = "osdisk-5"
    vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk-5.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  # Optional data disks
    storage_data_disk {
      name          = "data"
      vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/data5.vhd"
      disk_size_gb  = "100"
      create_option = "Empty"
      lun           = 0
    }

  os_profile {
    computer_name  = "worker3"
    admin_username = "${var.vm_username_worker3}"
    admin_password = "${var.vm_password}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    group = "Cloudera-Cluster"
  }
}
Execution

We can now execute the following commands from a shell environment. Note that all files must be placed in the same directory, and that terraform init must be run once first to download the AzureRM provider plugin.

[root@centos Terraform]# terraform init
[root@centos Terraform]# terraform plan -var-file=var_values.tfvars
[root@centos Terraform]# terraform apply -var-file=var_values.tfvars

 

After a few minutes, check your resources in your Azure portal.


To destroy the infrastructure, run the following command:

[root@centos Terraform]# terraform destroy -var-file=var_values.tfvars

 

Once our infrastructure has been fully provisioned by Terraform we can start the installation of the Cloudera distribution of Hadoop with Ansible.

 

 

Cet article Deploy a Cloudera cluster with Terraform and Ansible in Azure – part 1 est apparu en premier sur Blog dbi services.

get 3 consecutive dates (based on sys date) in a single column and map it with another Table's column

Tom Kyte - Tue, 2018-04-10 00:46
Hi, I need to show Quantity sold for three consecutive days (yesterday, today, tomorrow). So, date should be the first column and Quantity being second. Desired Output -------------- Date Qty .... ... 8/4/2018 10 9/4/2018...
Categories: DBA Blogs

Initializing a PLSQL table of records

Tom Kyte - Tue, 2018-04-10 00:46
Tom, How do you initialize a PL/SQL table of records in the Declaration section of a PL/SQL block? In the following snippet, I can successfully initialize a normal scalar PL/SQL table but am unsuccessful initializing a table of records. Can it...
Categories: DBA Blogs

Oracle Reports 11g showing junk characters fron nvarchar2 column stored in Oracle database 12c

Tom Kyte - Tue, 2018-04-10 00:46
Dear Sir, Our Oracle 12c database and Weblogic 11g are hosted in SunOS 5.11 11.2 environment. The output from nls_database_parameters is given below: NLS_RDBMS_VERSION 12.1.0.2.0 NLS_NCHAR_CONV_EXCP ...
Categories: DBA Blogs

Export all table in a schema into csv files

Tom Kyte - Tue, 2018-04-10 00:46
Hello, I want to export all table in a schema into csv files. I want that the csv files having the same name as the tables. I have following SQLPlus Code: <code>connect username/password set serveroutput on; set lines 80 set head off...
Categories: DBA Blogs

Problem with large tables and LIKE query

Tom Kyte - Tue, 2018-04-10 00:46
Hi, we have an application that uses Oracle database to hold company data. One of it's table, called PWORKSPACEOBJECT holds all 'displayable' objects in GUI client. Most of the time users are searching for some data only by typing *some text* ...
Categories: DBA Blogs

Had to set parallel_max_servers to 2 for Delphix clone

Bobby Durrett's DBA Blog - Mon, 2018-04-09 17:50

This is a quick note about a problem I had creating a Delphix virtual database clone of a production database.

This is an older 11.1.0.7 HP-UX Itanium database. I tried to make a new virtual database copy of the production database and it failed due to the following error:

ORA-07445: exception encountered: core dump [krd_flush_influx_buffers()+96] [SIGSEGV] [ADDR:0x10000000005D8] [PC:0x400000000B415880] [Address not mapped to object] []

I only found one thing on Oracle’s support site about krd_flush_influx_buffers but it was not an exact match because it had to do with Data Guard.

So I tried various parameters and none worked. I tried setting parallel_max_servers to 0, down from 100 in production, but that caused other issues. Then I remembered something about setting it to 2, so I tried that and it worked.

The strange thing is that I still see an ORA-07445 krd_flush_influx_buffers error in the alert log for the successful clone. But, somehow, changing the parallel_max_servers parameter to 2 allowed the various Delphix and Oracle processes to complete.

Bobby

Categories: DBA Blogs

Configuring PeopleSoft to Meet Your Needs

PeopleSoft Technology Blog - Mon, 2018-04-09 13:15

Configuration has been a major area of investment for PeopleSoft lately.  We've added many new features in PeopleTools and Enterprise Components that enable you to configure your environment to meet your enterprise's unique requirements.  Configuration is a better option than customization because it makes changes easier and cheaper to manage.  You can take image updates from Oracle/PeopleSoft with much less impact and not have to worry about re-implementing your customizations.  Configuration also enables you to tailor the PeopleSoft user interface to align with your business.  It can also streamline navigation and make it easier for users to adopt the Fluid UI.  We've been presenting sessions at conferences and on webinars about configuration, and you'll see more information in the near future.  We've also created a page on peoplesoftinfo.com that discusses configuration. 

Visit this page to learn more about all the different configuration features and how they can be used.  Read success stories to learn about how some customers are taking advantage of these configuration features.  This Key Concepts page will be the place to go as more information is added.

Data Hashing

Jonathan Lewis - Mon, 2018-04-09 12:10

Here’s a little-known feature that has been around since at least Oracle 10, though I don’t think I had ever seen it in the wild until today when someone reported on the ODC (OTN) database forum that they had a problem getting repeatable results.  It’s always possible, of course, that failure to get repeatable results is the natural consequence of running queries against a multi-user system, but if we assume that this was not the cause in this case we have to ask why a special hashing function that Oracle supplies to allow you to check that a set of data hasn’t changed gives you different results when “the data hasn’t changed”.

I’m talking about the function dbms_sqlhash.gethash() – a packaged function that exists in the SYS schema and isn’t usually exposed to other users. The function takes as its inputs the text of a query, a selected hashing function, and a “chunk” size. It will run the query, and use the hashing function to return a single 16 to 64 byte hash value representing the entire result set. Here’s an example of usage:


begin
        dbms_output.put_line(
                dbms_sqlhash.gethash(
                        sqltext     => 'select n1, d1 from t1 where id > 0',
                        digest_type => dbms_crypto.hash_md5
                        -- chunk_size  => 128*1048576   -- default 128MB
                )
        );
end;
/

6496D2438FECA960B1E916BF8C4BADCA

I haven’t specified a chunk size – the default is 128MB – and Oracle will hash this much of the result set in a single pass. If the result set is larger than this Oracle will hash each chunk in turn then generate a hash of the hash values. (This means, by the way, that changing the chunk size can change the hash value for large data sets).

There are 6 possible digest types in 12.1.0.2 (listed in the $ORACLE_HOME/rdbms/admin/dbmsobtk.sql script that creates the dbms_crypto package – so you will need the execute privilege on both dbms_sqlhash and dbms_crypto to use the function if you want to code with symbolic constants):

rem         HASH_MD4           CONSTANT PLS_INTEGER            :=     1;
rem         HASH_MD5           CONSTANT PLS_INTEGER            :=     2;
rem         HASH_SH1           CONSTANT PLS_INTEGER            :=     3;
rem         HASH_SH256         CONSTANT PLS_INTEGER            :=     4;
rem         HASH_SH384         CONSTANT PLS_INTEGER            :=     5;
rem         HASH_SH512         CONSTANT PLS_INTEGER            :=     6;

Let’s put the whole thing into a demonstration that will allow us to see an important point – you have to be careful with your query:


rem
rem     Script:         gethash.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2016
rem

execute dbms_random.seed(0)

create table t1
nologging
as
select
        1e4 - rownum                    id,
        trunc(dbms_random.value(0,100)) n1,
        trunc(sysdate)                  d1,
        lpad('x',100,'x')               padding
from
        dual
connect by
        level <= 1e4 -- > comment to avoid WordPress format issue
;

alter table t1 add constraint t1_pk primary key (id);

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

set feedback off

alter system flush shared_pool;
alter session set optimizer_mode = first_rows_1;

begin
        dbms_output.put_line(
                dbms_sqlhash.gethash(
                        sqltext     => 'select n1, d1 from t1 where id > 0',
                        digest_type => dbms_crypto.hash_md5
                        -- chunk_size  => 128*1048576   -- default 128MB
                )
        );
end;
/

alter system flush shared_pool;
alter session set optimizer_mode = all_rows;

begin
        dbms_output.put_line(
                dbms_sqlhash.gethash(
                        sqltext     => 'select n1, d1 from t1 where id > 0',
                        digest_type => dbms_crypto.hash_md5
                        -- chunk_size  => 128*1048576   -- default 128MB
                )
        );
end;
/

alter system flush shared_pool;
alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';

begin
        dbms_output.put_line(
                dbms_sqlhash.gethash(
                        sqltext     => 'select n1, d1 from t1 where id > 0',
                        digest_type => dbms_crypto.hash_md5
                        -- chunk_size  => 128*1048576   -- default 128MB
                )
        );
end;
/

alter session set nls_date_format='DD-MON-RR';

I’ve created a data set, added a primary key, and gathered stats, then I’ve called the same hashing function on the same SQL statement three times in a row. However, I’ve changed the session environment for each call – in the first case I’ve set the optimizer to “first rows(1)” optimization, then I’ve set the optimizer back to all_rows, then I’ve changed the nls_date_format from its default of “DD-MON-RR” (and that’s significant because I’ve got a date column in my query). Here is the output from running the script:


Table created.


Table altered.


PL/SQL procedure successfully completed.

6496D2438FECA960B1E916BF8C4BADCA
D41D4A2945D0B89A6C5DEB5060189A54
ECC3D2B66CB61821397CD9BD983FD5F4

The query has to return the same data content in all three cases – but the hash value is different in the three cases. The change in the optimizer mode has affected the order in which the data was returned (with first_rows(1) Oracle did a full scan of the primary key index, with all_rows it did a tablescan and sort); the change in the nls_XXX parameter meant the internal representation of the data changed.

You have to be very careful with dbms_sqlhash every time you use it if you want the same data set to produce the same result. First, to be safe, you need to ensure that you always use the same NLS parameters when using the function; then you need an “order by” clause in the query, and the columns used in the order by clause need to be a candidate key (i.e. unique, not null) for the data. Otherwise a change in the optimizer parameters, or the object stats, could result in a change in execution plan, with an ensuing change in the actual order of the data and a different hash value.
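Applied to the demo above, a safer call would therefore order by the primary key column. A sketch only, assuming the same schema, privileges and a fixed nls_date_format as in the script:

```sql
begin
        dbms_output.put_line(
                dbms_sqlhash.gethash(
                        -- "order by id" pins the row order: id is the primary
                        -- key (unique, not null), so the hash no longer depends
                        -- on the execution plan the optimizer happens to choose
                        sqltext     => 'select n1, d1 from t1 where id > 0 order by id',
                        digest_type => dbms_crypto.hash_md5
                )
        );
end;
/
```

Note that the order by clause removes only the plan-dependent variation; the NLS settings still have to be fixed explicitly for repeatable results.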

 

Dbvisit Standby Archive Log Daemon

Yann Neuhaus - Mon, 2018-04-09 07:27

Dbvisit Standby version 8 comes with a nice feature: a daemon that sends and applies the archive logs automatically in the background. By replacing system scheduling, the daemon facilitates fine tuning of customer RPO (Recovery Point Objective) and RTO (Recovery Time Objective). Applying logs to the standby only when needed also optimizes the use of resources. Originally available for Linux based environments, the feature has been available on Windows based platforms since 8.0.06. This blog covers its implementation and shows its benefits.

The demo database environments have been easily managed thanks to the DBI DMK tool.

Environment
DBVP : Primary Server
DBVS : Standby Server
DBVPDB_SITE1 : Primary database
DBVPDB_SITE2 : Physical Standby database

 

Daemon start/stop/status
oracle@DBVP:/home/oracle/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D start
Starting Dbvisit Daemon...
Started successfully.

oracle@DBVP:/home/oracle/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D status
Dbvisit Daemon process is running with pid 11546.

oracle@DBVP:/home/oracle/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D stop
Stopping Dbvisit Daemon...
Successfully stopped.

 

Automatic startup
In order to start the daemon automatically at boot, and to easily manage its status, we will create a dbvlogdaemon service.
[root@DBVP ~]# vi /etc/systemd/system/dbvlogdaemon.service

[root@DBVP ~]# cat /etc/systemd/system/dbvlogdaemon.service
[Unit]
Description=DB Visit log daemon Service
After=oracle.service

[Service]
Type=simple
RemainAfterExit=yes
User=oracle
Group=oinstall
Restart=always
ExecStart=/u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D start
ExecStop=/u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D stop

[Install]
WantedBy=multi-user.target

[root@DBVP ~]# chmod 644 /etc/systemd/system/dbvlogdaemon.service

[root@DBVP ~]# systemctl daemon-reload

[root@DBVP ~]# systemctl enable dbvlogdaemon.service

Of course this would not avoid impact in case of a daemon crash, which could be simulated with a kill command.

 

Check running daemon
oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] ps -ef | grep dbvctl | grep -v grep
oracle    4299     1  0 08:25 ?        00:00:02 /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D start

oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_SITE1 -D status
Dbvisit Daemon process is running with pid 4299.

oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ps -ef | grep dbvctl | grep -v grep
oracle    4138     1  0 08:25 ?        00:00:01 /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D start

oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_SITE1 -D status
Dbvisit Daemon process is running with pid 4138.

 

Daemon Parameter
# DMN_DBVISIT_INTERVAL     - interval in sec for dbvisit schedule on source
# DMN_MONITOR_INTERVAL     - interval in sec for log monitor schedule on source
# DMN_DBVISIT_TIMEOUT      - max sec for a dbvisit process to complete on source
# DMN_MONITOR_TIMEOUT      - max sec for a monitor process to complete on source
# DMN_MONITOR_LOG_NUM      - number of logs to monitor on source
# DMN_MAX_FAIL_NOTIFICATIONS - max number of emails sent on failure on source
# DMN_BLACKOUT_STARTTIME   - blackout window start time HH:MI on source
# DMN_BLACKOUT_ENDTIME     - blackout window end time HH:MI on source
# DMN_DBVISIT_INTERVAL_DR  - interval in sec for dbvisit schedule on destination
# DMN_MONITOR_INTERVAL_DR  - interval in sec for log monitor schedule on destination
# DMN_DBVISIT_TIMEOUT_DR   - max sec for a dbvisit process to complete on destination
# DMN_MONITOR_TIMEOUT_DR   - max sec for a monitor process to complete on destination
# DMN_MONITOR_LOG_NUM_DR   - number of logs to monitor on destination
# DMN_MAX_FAIL_NOTIFICATIONS_DR - max number of emails sent on failure on destination
# DMN_BLACKOUT_STARTTIME_DR- blackout window start time HH:MI on destination
# DMN_BLACKOUT_ENDTIME_DR  - blackout window end time HH:MI on destination

With the daemon, we can pause the archive send/apply process using the DMN_BLACKOUT parameters.

To set up our lab, we will act on the most important parameters:
  • DMN_MONITOR_INTERVAL (primary) and DMN_MONITOR_INTERVAL_DR (standby).
    The monitor interval is the frequency at which Dbvisit checks for new archive logs, acting only when some exist.
  • DMN_DBVISIT_INTERVAL (primary) and DMN_DBVISIT_INTERVAL_DR (standby).
    The Dbvisit interval is the frequency at which Dbvisit forces a send/apply run. This action depends on the LOGSWITCH DDC parameter. The recommendation is not to go below 5 minutes.
oracle@DBVP:/oracle/u01/app/dbvisit/standby/conf/ [DBVPDB] pwd
/oracle/u01/app/dbvisit/standby/conf

oracle@DBVP:/u01/app/dbvisit/standby/conf/ [DBVPDB] egrep 'DMN_DBVISIT_INTERVAL|DMN_MONITOR_INTERVAL' dbv_DBVPDB_SITE1.env
# DMN_DBVISIT_INTERVAL     - interval in sec for dbvisit schedule on source
# DMN_MONITOR_INTERVAL     - interval in sec for log monitor schedule on source
# DMN_DBVISIT_INTERVAL_DR  - interval in sec for dbvisit schedule on destination
# DMN_MONITOR_INTERVAL_DR  - interval in sec for log monitor schedule on destination
DMN_DBVISIT_INTERVAL = 300
DMN_MONITOR_INTERVAL = 60
DMN_DBVISIT_INTERVAL_DR = 300
DMN_MONITOR_INTERVAL_DR = 60

 

The LOGSWITCH parameter determines whether a database log switch (alter system switch logfile) should be triggered at each Dbvisit execution:
N (default value): only if there are no new archive logs to transfer.
Y: at every execution, independently of archive log creation.
I (Ignore): never. To be used with caution.

A daemon restart is mandatory after DDC configuration file updates.
[root@DBVP ~]# service dbvlogdaemon stop
Redirecting to /bin/systemctl stop dbvlogdaemon.service
[root@DBVP ~]# service dbvlogdaemon start
Redirecting to /bin/systemctl start dbvlogdaemon.service

[root@DBVS ~]# service dbvlogdaemon stop
Redirecting to /bin/systemctl stop dbvlogdaemon.service
[root@DBVS ~]# service dbvlogdaemon start
Redirecting to /bin/systemctl start dbvlogdaemon.service

 

Send and apply archive log demo
Get current date and primary current sequence.
SQL> select sysdate from dual;

SYSDATE
-------------------
2018/03/28 12:30:50

SQL> select max(sequence#) from v$log;

MAX(SEQUENCE#)
--------------
           179

Generate a Dbvisit gap report.
oracle@DBVP:/u01/app/dbvisit/standby/conf/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.16_0_g4e0697e2) (pid 21393)
dbvctl started on DBVP: Wed Mar 28 12:30:57 2018
=============================================================

Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201803281230:
-------------------------------------------------------------
Destination database on DBVS is at sequence: 178.
Source database on DBVP is at log sequence: 179.
Source database on DBVP is at archived log sequence: 178.
Dbvisit Standby last transfer log sequence: 178.
Dbvisit Standby last transfer at: 2018-03-28 12:29:14.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:01:27.


=============================================================
dbvctl ended on DBVP: Wed Mar 28 12:31:06 2018
=============================================================

No archive logs need to be sent and applied on the standby. Both databases are in sync.


Generate logfile switch
SQL> alter system switch logfile;

System altered.

SQL> alter system switch logfile;

System altered.

SQL> alter system switch logfile;

System altered.

Check current date and primary database current sequence.
SQL> select sysdate from dual;

SYSDATE
-------------------
2018/03/28 12:31:29

SQL> select max(sequence#) from v$log;

MAX(SEQUENCE#)
--------------
           182

Generate new Dbvisit gap reports.
oracle@DBVP:/u01/app/dbvisit/standby/conf/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.16_0_g4e0697e2) (pid 21454)
dbvctl started on DBVP: Wed Mar 28 12:31:38 2018
=============================================================

Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201803281231:
-------------------------------------------------------------
Destination database on DBVS is at sequence: 178.
Source database on DBVP is at log sequence: 182.
Source database on DBVP is at archived log sequence: 181.
Dbvisit Standby last transfer log sequence: 178.
Dbvisit Standby last transfer at: 2018-03-28 12:29:14.

Archive log gap for thread 1:  3.
Transfer log gap for thread 1: 3.
Standby database time lag (DAYS-HH:MI:SS): +00:02:27.


=============================================================
dbvctl ended on DBVP: Wed Mar 28 12:31:47 2018
=============================================================
We can see that there are 3 new archive logs to transfer and apply on the standby: a 3-sequence lag between the two databases.

oracle@DBVP:/u01/app/dbvisit/standby/conf/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.16_0_g4e0697e2) (pid 21571)
dbvctl started on DBVP: Wed Mar 28 12:32:19 2018
=============================================================
Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201803281232:
-------------------------------------------------------------
Destination database on DBVS is at sequence: 178.
Source database on DBVP is at log sequence: 182.
Source database on DBVP is at archived log sequence: 181.
Dbvisit Standby last transfer log sequence: 181.
Dbvisit Standby last transfer at: 2018-03-28 12:32:13.
Archive log gap for thread 1:  3.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:02:27.
=============================================================
dbvctl ended on DBVP: Wed Mar 28 12:32:27 2018
=============================================================
Within the next minute, the 3 archive logs were automatically transferred to the standby by the daemon.

oracle@DBVP:/u01/app/dbvisit/standby/conf/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.16_0_g4e0697e2) (pid 21679)
dbvctl started on DBVP: Wed Mar 28 12:33:00 2018
=============================================================

Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201803281233:
-------------------------------------------------------------
Destination database on DBVS is at sequence: 181.
Source database on DBVP is at log sequence: 182.
Source database on DBVP is at archived log sequence: 181.
Dbvisit Standby last transfer log sequence: 181.
Dbvisit Standby last transfer at: 2018-03-28 12:32:13.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:01:13.


=============================================================
dbvctl ended on DBVP: Wed Mar 28 12:33:09 2018
=============================================================

Another minute later, the standby daemon applied the new archive logs. Both databases are in sync.

 

Conclusion

Dbvisit's new daemon feature adds real flexibility in sending and applying archive logs, and helps improve customer RPO and RTO. We might still want to keep a daily crontab gap report emailed to the DBA team; this ensures the daemon itself is monitored and kept alive.
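Such a daily gap report could be sketched as the crontab entry below. The schedule and the recipient address are placeholders to adapt, the dbvctl path matches the lab above, and a working `mail` command is assumed on the host.

```shell
# Hypothetical crontab entry: mail the Dbvisit gap report to the DBA team at 07:30.
# Paths match the lab above; the recipient address is a placeholder.
30 7 * * * /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -i 2>&1 | mail -s "Dbvisit gap report DBVPDB_SITE1" dba-team@example.com
```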

Log switches and sending archive logs to the standby consume real system resources. The Dbvisit daemon will also help fine-tune this resource usage.

Note that the daemon processes must be restarted after each daylight saving clock change.

 

The post Dbvisit Standby Archive Log Daemon appeared first on the dbi services blog.

Fanatics Turns Customers into Even Bigger Fans with Oracle CX Cloud Suite

Oracle Press Releases - Mon, 2018-04-09 07:00
Press Release
Fanatics Turns Customers into Even Bigger Fans with Oracle CX Cloud Suite Global leader in licensed sports merchandise turns to Oracle to custom build great fan experiences

ORACLE MODERN CUSTOMER EXPERIENCE, Chicago, IL—Apr 9, 2018

Fanatics, the world’s largest online retailer of sports merchandise, has selected Oracle Customer Experience (CX) Cloud suite to help change the way fans purchase their favorite team apparel and jerseys across retail channels. With Oracle Service Cloud and Oracle Marketing Cloud, Fanatics has been able to transform the way it engages with its customers across channels and has launched innovative new customer service initiatives, including its “Jersey Assurance” program.     

Fanatics offers the largest collection of timeless and timely merchandise for sports fans shopping online, on a phone, in stores, in stadiums or on-site at the world’s biggest sporting events. To continue to provide a truly world-class experience to its customer base of passionate sports fans, Fanatics needed to rethink its customer support organization to empower its customer service “athletes” (agents) and “coaches” (supervisors). After evaluating different solutions, Fanatics selected Oracle Service Cloud to modernize its contact centers and launch its new “Athletes Solutions Kiosk” and Oracle Marketing Cloud to get closer to fans.

“One of the things about Fanatics that is core to our business is the Fan Experience and we wanted to make sure that whenever fans contacted us, whether it’s through voice, chat or email, that we would be able to give them a really world-class experience,” said Carolyne Matseshe-Crawford, vice president, fan experience, Fanatics. “We had a very talented group of people, but they were working independently in silos and we knew that had to change. With Oracle, we have been able to bring our team together and deliver a tailored experience so that whenever or however fans contact us, it feels like they are communicating with a friend.”   

Oracle Service Cloud has enabled Fanatics to break down legacy customer service silos and take advantage of a unified, omni-channel service solution that combines web, social and contact center experiences. With the new Oracle powered Athletes Solutions Kiosk, Fanatics has been able to reduce the time required to resolve fans’ questions, drive consistent engagement across channels and provide a more personalized and friendly fan experience.

In addition, Fanatics has been able to launch a new “Jersey Assurance” program that offers fans the opportunity for a free replacement jersey, if the active pro player switches teams within 90 days of purchase. Oracle Marketing Cloud has enabled Fanatics to further personalize the fan experience by tracking and analyzing fan behavior across channels and devices to create tailored, engaging and seamless fan experiences.  

“Sports fans have an emotional bond with their teams and they expect that passion to always be understood and respected,” said Stephen Fioretti, vice president for CX engagement solutions, Oracle. “That expectation raises the bar for customer experience professionals, but the Fanatics team easily meets it as they are incredibly passionate about sports, and everything they do focuses on the customer. With Oracle Service Cloud and Oracle Marketing Cloud, Fanatics now has the technology needed to break down the barriers that were getting in its way and deliver the best possible customer service across channels.”

Oracle Service Cloud and Oracle Marketing Cloud are part of Oracle Customer Experience (CX) Cloud Suite, which empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences and outcomes, Oracle CX Cloud Suite helps customers reduce IT complexity, deliver innovative customer experiences and achieve predictable and tangible business results.

For additional information about Oracle CX, follow @OracleCX on Twitter, LinkedIn and Facebook or visit SmarterCX.com.

Contact Info
Kimberly Guillon
Oracle
209.601.9152
kim.guillon@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

About Fanatics

As the global leader in licensed sports merchandise, Fanatics is changing the way fans purchase their favorite team apparel and jerseys through an innovative, tech-infused approach to making and selling fan gear in today’s on-demand culture. Powering multichannel commerce for the world’s biggest sports brands, Fanatics offers the largest collection of timeless and timely merchandise whether shopping online, on your phone, in stores, in stadiums or on-site at the world’s biggest sporting events.

The company powers the Fanatics, FansEdge, Kitbag and Majestic brands, while also offering the largest selection of sports collectibles and memorabilia through Fanatics Authentic. A multi-faceted, mobile-first company, Fanatics operates more than 300 online and offline stores, including the e-commerce business for all major professional sports leagues (NFL, MLB, NBA, NHL, MLS, NASCAR, PGA), major media brands (NBC Sports, CBS Sports, FOX Sports) and more than 200 collegiate and professional team properties, which include all MLS teams and several global soccer clubs.

In addition to e-commerce, the company’s capabilities include multichannel-integrated event and team retail across all leagues and major events around the world, such as the Kentucky Derby, Ryder Cup, NFL games in London and NHL’s Winter Classic; international capabilities through its Fanatics International division that provides a global sports retail platform; and an in-house merchandise division that is a licensed partner of all the major sports leagues and helps fans express passion through a broad range of styles, designs and jerseys created under both the Fanatics and Majestic brands. Fanatics’ vertical manufacturing engine is built for the on-demand economy and brings much-needed agility to the industry, better servicing today’s passionate sports fans and their growing real-time expectations with more unique and innovative products readily available across retail channels.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


ORACLE SQL Question sum credits and debits

Tom Kyte - Mon, 2018-04-09 06:26
I have a transaction table called TRANSACTION DETAIL.... TRANSACTION DETAIL CNN TYPE AMT DATE C1 C 1000 10-Jan-16 C2 C 1200 10-Jan-16 C3 C 2000 11-Jan-16 C4 D 1000 12-Jan-16 C3 D...
Categories: DBA Blogs

Unable to get expdp/impdp utility run successfully, getting ORA-39002: invalid operation ORA-39070: Unable to open the log file.

Tom Kyte - Mon, 2018-04-09 06:26
Dear Tom, very Good day to you. I am trying to use expdp/impdp utility to backup tables,schema etc and it is not executing successfully. The followings are what I am getting in regard to errors <code>ORA-39002: invalid operation ORA-39070: Un...
Categories: DBA Blogs

Where is the web address of the "Partition" introduction details on AskTom ?

Tom Kyte - Mon, 2018-04-09 06:26
Hello, AskTom Team. A few months ago, I have glanced a link about "<b><i>Partition</i></b>" introduction details. But now I haven't seen it. Please tell me what web address is. Best Regards Quanwen Zhao
Categories: DBA Blogs

Why does the year to month interval return an erroneous value compared to the day to second interval?

Tom Kyte - Mon, 2018-04-09 06:26
Hi Tom, While I was working through an example in a book, I ran into a problem I couldn't figure out. It involves two scripts and its results which are provided below: Script 1: <code> SELECT loan_id,due_date, tool_out_date, NUMTOYMINTERVAL...
Categories: DBA Blogs
