Feed aggregator

In order to connect to 12C DB, do we have to change our client version to 11.2.0.3(or above)?

Tom Kyte - Fri, 2018-04-13 02:06
Hi Team, we've installed the latest Oracle Database 12c Release 2 on OEL 6.8 and found that we could not connect to our DB unless we use a client whose version is 11.2.0.3 or above (inclusive)! Here's our scenario: First, we tried to connect to the DB using 11...
Categories: DBA Blogs

Three Quick Tips API Platform CS - Gateway Installation (Part 3)

OTN TechBlog - Thu, 2018-04-12 16:09

Part 2 of the series can be accessed here. Today we keep it short and simple: here are three troubleshooting tips for the Oracle API CS Gateway installation:

  • If, while running the "install" action, you see output like the following:

           -bash-4.2$ ./APIGateway -f gateway-props.json -a install-configure-start-join
Please enter user name for weblogic domain,representing the gateway node:
weblogic
Password:
2018-03-22 17:33:20,342 INFO action: install-configure-start-join
2018-03-22 17:33:20,342 INFO Initiating validation checks for action: install.
2018-03-22 17:33:20,343 WARNING Previous gateway installation found at directory = /u01/oemm
2018-03-22 17:33:20,343 INFO Current cleanup action is CLEAN
2018-03-22 17:33:20,343 INFO Validation complete
2018-03-22 17:33:20,343 INFO Action install is starting
2018-03-22 17:33:20,343 INFO start action: install
2018-03-22 17:33:20,343 INFO Clean started.
2018-03-22 17:33:20,345 INFO Logging to file /u01/oemm/logs/main.log
2018-03-22 17:33:20,345 INFO Outcomes of operations will be accumulated in /u01/oemm/logs/status.log
2018-03-22 17:33:20,345 INFO Clean finished.
2018-03-22 17:33:20,345 INFO Installing Gateway
2018-03-22 17:33:20,718 INFO complete action: install isSuccess: failed detail: {}
2018-03-22 17:33:20,718 ERROR Action install has failed. Detail: {}
2018-03-22 17:33:20,718 WARNING Full-Setup execution incomplete. Please check log file for more details
2018-03-22 17:33:20,719 INFO Execution complete.

  The issue could be permissions on the "/tmp" directory. Check that the /tmp directory, which the OUI installer uses by default, is not mounted with "noexec", "nosuid", or "nodev", and check for other permission issues as well. Another area to investigate is the size allocated to the "/tmp" file system (it should be greater than or equal to 10 GB). A quick check is sketched after this list.

  • If, at some point while running any of the installer actions, you get an "Invalid JSON object: .... " error, check whether the gateway-master.props file is empty. This can happen if, for example, you press "ctrl+z" to exit an installer action. The best approach is to back up the gateway-master.json file beforehand and restore it if the above error occurs (a simple backup command is included in the sketch after this list). In the worst case copy the gateway-master .
  • If the "start" action is unable to start the managed server but the admin server starts OK, try changing the "publishAddress" property's value to the "listenIpAddress" property's value, and run install, configure, and start again. In other words, set "publishAddress" = "listenIpAddress".
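
As a quick illustration of the first two tips above, the following shell commands are a sketch of the checks involved (the paths are the defaults mentioned above and may differ in your environment):

findmnt -no OPTIONS /tmp                           # verify /tmp is not mounted with noexec, nosuid or nodev
df -h /tmp                                         # verify the size allocated to /tmp (should be >= 10 GB)
cp gateway-master.json gateway-master.json.bak     # keep a backup before running installer actions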

That is all for now; we will be back soon with more.


Oracle VM Server x86: Creation of a virtual machine

Dietrich Schroff - Thu, 2018-04-12 14:47
After all these steps it is possible to create a VM.

Click the third icon:

And here is the summary:
One thing is still missing: I did not put an ISO image into the repository, so this VM has no media to start from. I will post about importing an ISO image in a week.

Businesses Struggle to Protect Sensitive Cloud Data According to New Oracle and KPMG Cloud Threat Report

Oracle Press Releases - Thu, 2018-04-12 13:18
Press Release
Businesses Struggle to Protect Sensitive Cloud Data According to New Oracle and KPMG Cloud Threat Report
Despite defined security policies, eight in 10 organizations worry about employee compliance and four in 10 say detecting and responding to cloud security incidents is a top cyber security challenge

REDWOOD SHORES, Calif. and NEW YORK—Apr 12, 2018

In a recent survey of 450 global IT professionals conducted by Oracle (NYSE: ORCL) and KPMG LLP, results show that organizations are struggling to protect their data amid a growing number of security breaches. The Oracle and KPMG Cloud Threat Report, 2018 found that 90 percent of information security professionals classify more than half of their cloud data as sensitive. Furthermore, 97 percent have defined cloud-approval policies; however, the vast majority (82 percent) noted they are concerned about employees following these policies.

For enterprises storing sensitive data in the cloud, an enhanced security strategy is key to monitoring and protecting that data. In fact, 40 percent of respondents indicate that detecting and responding to cloud security incidents is now their top cyber security challenge. As part of apparent efforts to address this challenge, four in 10 companies have hired dedicated cloud security architects, while 84 percent are committed to using more automation to effectively defend against sophisticated attackers.

“As organizations expand their cloud footprint, traditional security measures are unable to keep up with the rapid growth of users, applications, data, and infrastructure,” said Akshay Bhargava, vice president, Cloud Business Group, Oracle. “Autonomous security is critical when adopting more cloud services to easily deploy and manage integrated policies that span hybrid and multi-cloud environments. By using machine learning, artificial intelligence and orchestration, organizations can more quickly detect and respond to security threats, and protect their assets.”

“The pace of innovation and change in business strategies today necessitate flexible, cost-effective, cloud-based solutions,” said Tony Buffomante, U.S. Leader of KPMG LLP’s Cyber Security Services. “As many organizations migrate to cloud services, it is critical that their business and security objectives align, and that they establish rigorous controls of their own, versus solely relying on the cyber security measures provided by the cloud vendor.”

Additional Key Findings
  • Changing threat landscape poses challenges: Only 14 percent surveyed are able to effectively analyze and respond to the vast majority (75-100 percent) of their security event data.
  • Cyber security spending on the rise: 89 percent surveyed expect their organization to increase cyber security investments in the next fiscal year.
  • Inconsistency in cloud policies: 26 percent cited a lack of unified policies across disparate infrastructure as a top challenge.
  • Rethinking cloud strategies and providers in the face of changing regulations: General Data Protection Regulation (GDPR) will impact cloud strategies and service provider choices, according to 95 percent of respondents who must comply.
  • Mobile users are creating identity and access management (IAM) challenges for organizations: 36 percent said mobile device and application use make IAM controls and monitoring more difficult.
  • Automation can help: 29 percent surveyed are using machine learning on a limited basis, 18 percent do so extensively, and another 24 percent are now adding machine learning to existing security tools.

To find out more about the Oracle and KPMG Cloud Threat Report, 2018, visit Oracle at the RSA Conference, April 16–20 in San Francisco. (Booth #1115—Moscone South)

About the Report

The data in the Oracle and KPMG Cloud Threat Report, 2018 is based on a survey of 450 cyber security and IT professionals from private and public-sector organizations in North America (United States and Canada), Western Europe (United Kingdom), and Asia (Australia, Singapore).

Contact Info
Jesse Caputo
Oracle
+1.650.506.5967
jesse.caputo@oracle.com
Michael Rudnick
KPMG LLP
+1.201.307.7398
mrudnick@kpmg.com
Kristen Morgan
KPMG LLP
+1.410.949.2668
kmorgan@kpmg.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About KPMG LLP

KPMG LLP, the audit, tax and advisory firm (www.kpmg.com/us), is the independent U.S. member firm of KPMG International Cooperative ("KPMG International"). KPMG International’s independent member firms have 197,000 professionals working in 154 countries. KPMG International has been named a Leader in the Forrester Research Inc. report, The Forrester Wave™, Information Security Consulting Services, Q3 2017. Learn more at www.kpmg.com/us. Some or all of the services described herein may not be permissible for KPMG audit clients and their affiliates. 

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Jesse Caputo

  • +1.650.506.5967

Michael Rudnick

  • +1.201.307.7398

Kristen Morgan

  • +1.410.949.2668

Updated: Creating Custom EBS 12.2 Applications White Paper

Steven Chan - Thu, 2018-04-12 13:01

The least invasive method of fine-tuning your E-Business Suite environment to your users' needs is via personalizations.  If that approach is insufficient, you can create custom applications via Forms, Reports, and other technologies that segregate your customizations from the standard functionality provided with Oracle E-Business Suite.

We have updated our white paper that describes steps for creating these custom application extensions:

Based on your feedback via Oracle Support, we've expanded various sections with additional background. Please feel free to comment on your experiences with this documentation, or log a Service Request with Oracle Support.

Related Articles

 

Categories: APPS Blogs

Data index inside compressed index for cardinality calculation?

Tom Kyte - Thu, 2018-04-12 07:46
I have been wondering why Oracle 11 does not follow the explained plan for subquery execution with the cost-based optimiser. It selects another index for execution, which causes very slow performance on a 34M-row table. The table got 900,000 rows in seven weeks. Pro...
Categories: DBA Blogs

“Retail 2018: The Loyalty Divide” Reveals Brands Underestimate the Opportunity for Social Advocacy and Personalization to Drive Continued Revenue

Oracle Press Releases - Thu, 2018-04-12 07:00
Press Release
“Retail 2018: The Loyalty Divide” Reveals Brands Underestimate the Opportunity for Social Advocacy and Personalization to Drive Continued Revenue
Oracle Retail Report Highlights Four Consumer Typologies and the Role of Emerging Generations and Technology in Creating More Meaningful Consumer Interactions

Oracle Industry Connect—New York, NY.—Apr 12, 2018

Oracle today announced the findings of a global study titled “Retail 2018: The Loyalty Divide” auditing consumer perceptions and brand realities of loyalty programs and influences. “Retail 2018: The Loyalty Divide” reveals that retailers are out of touch with consumers that demand more personalized experiences and discover brands and affirm purchase decisions through social influencers.

“In our primary research, we have uncovered a disparity between consumer and retailer expectations. Retailers put significant focus on transactional activity metrics and less focus on emerging behavioral expressions of loyalty. We found that retailers are overly confident in their ability to deliver relevant incentives and consumers are demanding more personalized engagement,” said Mike Webster, senior vice president and general manager, Oracle Retail and Hospitality. “Retailers need to take a critical eye at the culture of shoppers that only engage based on convenience and price. Social influence brings an additional dynamic for retailers to navigate the loyalty paradigm as they reward brand advocacy and feed enthusiasts content to affirm their purchases.”

The Oracle study was conducted in February 2018 among 13,000 consumers and 500 operators across retail, hotels and restaurants in five key regions: EMEA (France, Germany, UK and India), North America (USA), Latin America (Brazil and Mexico) and JAPAC (Australia and China).

The Great Divide

While 58 percent of retailers believe that consumers are eager to sign up to every loyalty program, 50 percent of consumers are much more selective, signing up only to select, relevant programs, and 19 percent of consumers rarely join loyalty programs. The relevancy of loyalty incentives further highlights the divide between brands and consumers: 58 percent of retailers believe their offers are mostly relevant, compared to 32 percent of consumers who believe those brand offers are relevant. Retailers continue to underestimate the impact of social influencers, with only 45 percent of brands collaborating with influencers, while consumers are more likely to trust brands reviewed by YouTubers (48 percent) and brands mentioned on social media (45 percent).

Navigating the New Loyalty Paradigm The Future of Loyalty

Despite the great divide, the future of loyalty is promising, with younger demographics having a higher propensity to join loyalty programs and noting that their loyalty is growing. However, a majority of retailers have a skewed focus on measuring loyalty with intrinsic values like brand perception and purchase history.

  • 44 percent of millennials (25-34) and 43 percent of pre-millennials (18 to 24) note they are more loyal to brands than they were five years ago
  • Baby boomers (55+) are discerning when it comes to signing up for programs with 56 percent of respondents noting they will only sign up to select, relevant programs

The Rise of Social Advocacy

Consumers have clearly indicated the necessity for retailers to have a strong social presence, and the importance of social influencers in discovering new brands and affirming purchases.

  • 53 percent of consumers are likely to research brands on social media before buying
  • 46 percent are likely to save ideas on social media about products or retailers
  • 43 percent are likely to share photos of retail experiences/products on social media
  • 43 percent are likely to follow influencers that post about favorite retail brands
  • 41 percent of consumers agree that YouTube reviews are more trustworthy than branded advertising or communications
  • 37 percent of consumers agree that retailers used and recommended by social media influencers are more trustworthy than those recommended by celebrities
  • Despite this trend, 28 percent of retailers will only take into account measures of loyalty based on activities such as loyalty card membership or transaction frequency

Personalization: Connected and Immediate

For brands to remain relevant they need to create loyalty programs that recognize consumers as individuals with a level of service that goes beyond the traditional brand experience.

  • Connected

    • 69 percent of consumers note personalized offers based on stated preferences as appealing

    • 66 percent of consumers note personalized offers based on purchase history as appealing

    • 58 percent of consumers note personalized content and communications as appealing

  • Immediate

    • 82 percent of consumers note store proximity is most important in driving loyalty

    • 74 percent of consumers note immediate benefits are more appealing than accumulating points

    • 72 percent of consumers note an effortless loyalty program with automatic offers is appealing

“Retailers are heavily invested. The future of loyalty will be a balancing act between consumers’ desire for more anonymity, or at least direct control of their data, and an expectation for meaningful personalization that is targeted and timely,” said Webster. “Oracle Retail is helping our customers defend the right to be forgotten and pivot to earn the right to be remembered. We believe the answer is a new approach to segmentation that integrates advanced algorithms and machine learning into the retail business processes that govern planning, inventory and pricing.”

The Right to Data and the Role of Technology

Both consumers and retailers recognize that technology has a key role in driving connection and convenience in loyalty programs. Retailers, however, need to walk a fine line between enabling deeper connections without being invasive.

  • 30 percent of consumers find receiving recommendations for products or brands based on social influencers subscribed to or followed as unappealing compared to 90 percent of retailers that think this would be appealing to consumers
  • 26 percent of consumers find artificial intelligence on a mobile device that gets to know the user through voice recognition and is able to make intelligent recommendations based on this as unappealing compared to 91 percent of retailers that think this would be appealing to consumers
  • 91 percent of consumers note being able to accept or reject offers so that the retailer loyalty program can learn what products and offers are of most interest as appealing
  • 86 percent of consumers note a personalized experience with retail brands where staff and customer support know personal preferences to better service customers as appealing
  • 52 percent of retailers think consumers are concerned about data being passed onto third parties while 81 percent of consumers say they’d consider removing their personal information if they could and 53 percent of consumers are concerned that their data is being passed onto third parties

Four Loyalty Typologies

Retail 2018: The Loyalty Divide uncovered four typologies of consumer behavior: The Broadcaster, who may flit between brands but shouts about their experiences, good or bad; The Enthusiast, an engaged retail brand follower who is loyal but not loud; The Lazy Loyal, typically unengaged but tending to be loyal to brands because it is easy to be; and The Seeker, who likes to shop around for the best value and holds little affinity to retail brands.

The Broadcaster
  • 32 percent of consumers will recommend to others the retailers they are most loyal to
  • 41 percent would share photos on social media of great retail experiences in exchange for rewards
  • 47 percent would feature the retailer or its products on their social media accounts in exchange for offers/rewards
  • 42 percent would submit a product review through YouTube in exchange for an offer/reward

The Enthusiast
  • 43 percent of consumers are most loyal to brands that they have a high opinion of
  • 1 in 5 consumers (20 percent) will follow their favorite brands on social media
  • 71 percent say product quality and 59 percent an enjoyable shopping experience are most important to them
  • 51 percent say it’s important that they can engage with new and exciting products from brands they are loyal to

The Lazy Loyal
  • 60 percent say convenient store locations are most important to them
  • More than 1 in 3 (40 percent) will typically stick to the brands they like rather than shop around
  • 1 in 4 consumers would not find a loyalty program that can be used across multiple brands appealing
  • 72 percent think an effortless loyalty program where points are automatically redeemed is appealing

The Seeker
  • 66 percent choose a retailer because of competitive prices/promotions
  • 56 percent would exchange personal details in exchange for a personalized offer or promotion
  • 53 percent of consumers would always ‘shop around’ for different retailers to shop with
  • Almost 1 in 5 (19 percent) would rarely sign up to retailer loyalty programs
Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com

About Oracle Industry Connect

For more information about how Oracle is committed to empowering organizations through best-in-class, industry-specific business solutions, visit oracle.com/industries. To learn more about Oracle Industry Connect 2018, go to oracle.com/oracleindustryconnect.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail

Trademark

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

Oracle’s Moat Unveils New Invalid Traffic Detection Capabilities, Achieves Rigorous MRC Accreditation for SIVT of Desktop and Mobile Web Traffic

Oracle Press Releases - Thu, 2018-04-12 06:00
Press Release
Oracle’s Moat Unveils New Invalid Traffic Detection Capabilities, Achieves Rigorous MRC Accreditation for SIVT of Desktop and Mobile Web Traffic
Moat’s Advanced Functionality Uses Insights from Oracle’s Proprietary Data Assets to Help Identify Bots & Other Invalid Traffic

Redwood Shores, Calif.—Apr 12, 2018

Oracle’s Moat, a SaaS analytics measurement provider for marketers and publishers that is part of the Oracle Data Cloud, today announced new capabilities to detect invalid traffic, including traffic from sophisticated bots designed to look like consumers. Moat’s new functionality utilizes insights gained from Oracle’s proprietary data assets, including its Zenedge and Dyn acquisitions.

Highlighting the importance of these efforts, Moat also announced it has achieved the rigorous accreditation standards set by the Media Rating Council (MRC) for its desktop and mobile web Sophisticated Invalid Traffic (SIVT) detection.

“Collaborating with teams across Oracle has given us access to proprietary intelligence and approaches that uniquely position us to deliver even deeper insights for our customers and help drive business outcomes,” said Jonah Goodhart, SVP of Oracle Data Cloud and Co-Founder of Moat. “Along with our MRC accreditation, this strengthens the value of our invalid traffic detection capabilities as we continue equipping brands to make smarter decisions, improve transparency and move our industry forward.”

In addition, Moat has been granted accreditation by the Media Rating Council (MRC) for its detection of Sophisticated Invalid Traffic (SIVT) across desktop and mobile web. The milestone comes at a more crucial time than ever, as sources of invalid traffic have grown in complexity over the past year. In accordance with MRC guidelines, Moat’s platform includes a new breakout of GIVT and SIVT rates to give publishers and marketers an extra level of insight into traffic sources. These metrics are mutually exclusive, filtered first for GIVT before reporting SIVT.

Other enhancements to Moat’s platform arm brands with advanced metrics, including:

  • Hidden Ad Rate—As part of Moat’s SIVT detection, this metric quantifies ads that are hidden from users for the entire duration of an impression, including the detection of different types of Hidden Ads.
  • Session Hijacking—Another component of Moat’s SIVT detection, this metric detects manipulated human activity, such as when a user session is forcibly redirected to another website, tab, or app store.
  • Invalid Domain Rate—An update to Moat’s Invalid Domain detection captures sophisticated domain-spoofing that attempts to take advantage of marketers and premium publishers alike.

“We congratulate Moat on the significant achievement of earning MRC accreditation for its Sophisticated Invalid Traffic detection and filtration capabilities for desktop and mobile web traffic,” said George W. Ivie, Executive Director and CEO of the MRC. “With the industry focused on improving transparency and reducing waste throughout the digital media supply chain, Moat’s MRC accreditation for SIVT once again clearly demonstrates that it’s at the forefront of industry leaders in promoting high quality digital measurement.”

“Transparency is critical to overcoming the impact of invalid traffic. Our integration with Oracle’s Moat opens new doors to greater measurement and gives us a better understanding of how viewability metrics, media quality and inventory integrity holistically impact the bottom line,” said Robert Stone, Senior Director, Digital Center of Excellence, Dr Pepper Snapple Group.

“The threat of invalid traffic impacts the overall health of our ecosystem,” said Luis Di Como, Senior Vice President of Media, Unilever. “Now more than ever we need technology offerings, like Moat’s platform, that deliver greater transparency and accountability by providing advanced metrics. It’s equally important that solutions providers work with the MRC to meet its rigorous standards for traffic validation and filtration.”

To learn more about invalid traffic, register for Moat’s webinar here: http://info.moat.com/invalid-traffic-webinar.html.

Contact Info
Simon Jones
Oracle
650.506.0325
s.jones@oracle.com
About Oracle’s Moat

Oracle’s Moat is a global analytics provider focused on making marketers and publishers more effective. From real-time attention metrics and intelligence to cross-platform measurement and new currencies, Moat offers solutions that make branding and storytelling work better. Its products include Moat Pro, which provides users with detailed snapshots of ad activity on the web, and Moat Analytics, a measurement platform that goes beyond traditional metrics like impressions or clicks to focus on attention. Moat was acquired by Oracle in 2017 and remains an independent platform within Oracle Data Cloud, which uses data and analytics to enhance media for leading marketers and publishers. Some of the largest brands and publishers in the world rely on Moat as a trusted provider. For more information on Oracle’s Moat, please visit www.moat.com.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle, please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Simon Jones

  • 650.506.0325

A DBA’s first steps in Jenkins

Amis Blog - Thu, 2018-04-12 02:56

My customer wanted an automated way to refresh an application database to a known state, to be carried out by non-technical personnel. As a DBA I know a lot of scripting and can build some small web interfaces, but why bother when there are readily available tools like Jenkins? Jenkins is mostly a CI/CD developer thing that to a classical DBA is a bit of magic. I decided to try this tool to script the refresh of my application.

Success

 

Getting started

First, fetch the Jenkins distribution from https://jenkins-ci.org; I used the latest jenkins.war version. Place the jenkins.war file in a desired location and you're almost set to go. Set the environment variable JENKINS_HOME to a sane value, or else your Jenkins settings, data, and work directory will end up in $HOME/.jenkins/.

Start Jenkins using the following command line:

java -jar jenkins.war --httpPort=8024

You may want to make a start script to automate this step; a minimal example is sketched below. Please note the --httpPort argument: choose an available port number (and make sure the firewall is opened for this port).
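
A minimal start script could look like the following (the paths, port number, and JENKINS_HOME location are only examples):

#!/bin/bash
# location for Jenkins settings, data and workspaces
export JENKINS_HOME=/u01/jenkins_home
# start Jenkins in the background on the chosen port and capture its output
nohup java -jar /u01/jenkins/jenkins.war --httpPort=8024 > /u01/jenkins/jenkins.log 2>&1 &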

When starting Jenkins for the first time, it creates a password that it shows in the standard output. You need this password when you open the Jenkins web interface for the first time. After logging in, install the recommended plugins; this set should include at least the Pipeline plugin. The next step will create your admin user account.
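
If you miss it in the standard output, the generated password can normally also be read from a file under JENKINS_HOME (assuming the default layout):

cat $JENKINS_HOME/secrets/initialAdminPassword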

Creating a Pipeline build job.

Navigate to “New Item” to start creating your first pipeline. Type a descriptive name and choose Pipeline as the type.

myfirstpipeline

After creating the job, you can start building the pipeline. In my case I needed four steps: stopping the Weblogic servers, clearing the schemas, importing the schemas and fixing things up, and finally starting Weblogic again.

The Pipeline scripting language is quite extensive; I only used the bare minimum of its possibilities, but at least it gets my job done. The actual code can be entered in the job configuration, in the Pipeline script field. A more advanced option is to retrieve your Pipeline code (plus additional scripts) from an SCM like Git or Bitbucket.

empty_pipeline

 

The code below is my actual code to allow the refresh of the application:

pipeline {
    agent any
    stages {
        stage ('Stop Weblogic') {
            steps { 
                echo 'Stopping Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/stopServers.py'
            }
        }
        stage ( 'Drop OWNER') {
            steps {
                echo "Dropping the Owner"
                sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus /@theSID @ scripts/drop_tables.sql"'
            }
        }
        stage ( 'Import OWNER' ) {
            steps {
                echo 'Importing OWNER'
                sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; impdp /@@theSID directory=thedirforyourdump \
                            dumpfile=Youknowwhichfiletoimport.dmp \
                            logfile=import-`date +%F-%h%m`.log \
                            schemas=ONLY_OWNER,THE_OTHER_OWNER,SOME_OTHER_REQUIRED_SCHEMA \
                            exclude=USER,SYNONYM,VIEW,TYPE,PACKAGE,PACKAGE_BODY,PROCEDURE,FUNCTION,ALTER_PACKAGE_SPEC,ALTER_FUNCTION,ALTER_PROCEDURE,TYPE_BODY"', returnStatus: true

				 echo 'Fixing invalid objects'           
                 sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus / as sysdba @?/rdbms/admin/utlrp"'    
				 
                 echo 'Gathering statistics in the background'
                 sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus /@theSID @ scripts/refresh_stats.sql"'
            }
        }
        stage ( 'Start Weblogic' ) {
            steps {
                echo 'Starting Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/startServers_turbo.py'
            }
        }
    }
}

In this script you can see the four global steps, but some steps are more involved. In this situation I decided not to completely drop the schemas associated with the application, because the dump file could come from a different environment with different passwords. Additionally, I only import the known schemas here; if the supplied dump file accidentally contains additional schemas, the errors in the log would be enormous because the user accounts are not created in the import stage.

When the job is saved, you can try a build; this will run your job, and you can monitor the console output to see how it is going.

SQL*Plus with wallet authentication

The observant types among you may have noticed that I used a wallet for authentication with SQL*Plus and impdp. As this tool would be used by people who should not get DBA passwords, using a password on the command line is not recommended: note that all the commands above and their output would be logged in plain text. So I decided to use a wallet for the account information. Most steps are well documented, but I found that the step of making the wallet auto-login capable (so you don't need to type a wallet password all the time) was documented using the GUI tool, but not the command-line tool. Luckily there are ways of doing that on the command line.

mkdir -p $ORACLE_HOME/network/admin/wallet
mkstore -wrl $ORACLE_HOME/network/admin/wallet/ -create
mkstore -wrl $ORACLE_HOME/network/admin/wallet -createCredential theSID_system system 'YourSuperSekritPassword'
orapki wallet create -wallet $ORACLE_HOME/network/admin/wallet -auto_login

sqlnet.ora needs to contain some information so the wallet can be found:

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = <<ORACLE_HOME>>/network/admin/wallet)
    )
  )
SQLNET.WALLET_OVERRIDE = TRUE

Also make sure a tnsnames.ora entry is added for your wallet credential name (above: theSID_system); a minimal example is sketched below. Now using sqlplus /@theSID_system should connect you to the database as the configured user.
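
A minimal tnsnames.ora entry for the credential alias could look like this (host, port and service name are placeholders for your environment):

theSID_system =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = theSID))
  )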

Asking Questions

The first job was quite static: always the same dump, or I would need to edit the pipeline code to change the named dump file… not as flexible as I would like. Can Jenkins help me here? Luckily, yes:

    def dumpfile
    def dbhost = 'theHost'
    def dumpdir = '/u01/oracle/admin/THESID/dpdump'

    pipeline {
    agent any
    stages {
        stage ('Choose Dumpfile') {
            steps {
                script {
                    def file_collection
                    file_collection = sh script: "ssh $dbhost 'cd $dumpdir; ls *X*.dmp *x*.dmp 2>/dev/null'", returnStdout: true
                    dumpfile = input message: 'Choose the right dump', ok: 'This One!', parameters: [choice(name: 'dump file', choices: "${file_collection}", description: '')]
                }
            }
        }
        stage ('Stop Weblogic') {
            steps { 
                echo 'Stopping Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/stopServers.py'
            }
        }
        stage ( 'Drop OWNER') {
            steps {
                echo "Dropping Owner"
                sh script: 'ssh $dbhost "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; sqlplus /@theSID @ scripts/drop_tables.sql"'
            }
        }
        stage ( 'Import OWNER' ) {
            steps {
                echo 'Import OWNER'
                sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; impdp /@theSID directory=dump \
                            dumpfile=$dumpfile \
                            logfile=import-`date +%F@%H%M%S`.log \
                            schemas=MYFAVOURITE_SCHEMA,SECONDOWNER \
                            exclude=USER,SYNONYM,VIEW,TYPE,PACKAGE,PACKAGE_BODY,PROCEDURE,FUNCTION,ALTER_PACKAGE_SPEC,ALTER_FUNCTION,ALTER_PROCEDURE,TYPE_BODY'", returnStatus: true
                            
                 sh script: 'ssh $dbhost "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; sqlplus / as sysdba @?/rdbms/admin/utlrp"'
                            
                 sh script: 'ssh dbhost "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; sqlplus /@theSID @ scripts/refresh_stats.sql"'
            }
        }
        stage ( 'Start Weblogic' ) {
            steps {
                echo 'Starting Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/startServers_turbo.py'
            }
        }
    }
}

The first stage looks at the location where all the dump files are to be found and does an ls on it. This listing is then stored in a variable that will be split into choices. The running job will wait for input, so no harm is done until the choice is made.

Starting a build like this will pause; you can see that when looking at the latest running build in the build queue.

When clicking the link, the choice can be made (or the build can be aborted).

The post A DBA’s first steps in Jenkins appeared first on AMIS Oracle and Java Blog.

Introducing Build Pipeline in Oracle Developer Cloud

OTN TechBlog - Wed, 2018-04-11 16:05

With our current release we are introducing a new build engine in Oracle Developer Cloud, called Mako. The new build engine also comes with enhanced functionality and a new user interface in the Oracle Developer Cloud Service ‘Build’ tab for defining build pipelines visually. This was a much-awaited capability in Oracle Developer Cloud from the Continuous Integration and Continuous Delivery perspective.

So what is changing in Developer Cloud build?

The screenshot below shows the user interface for the new ‘Build’ tab in Oracle Developer Cloud. A quick glance tells you that a new tab called ‘Pipelines’ has been added alongside the ‘Jobs’ tab. The concept of creating build jobs remains the same; pipelines come in addition to the build jobs that you can create.

Creating a build job has gone through a change as well. When you create a build job by clicking the ‘+New Job’ button in the Build tab, you get a dialog box to create the new build job. The first screenshot shows the earlier ‘New Job’ dialog, where you could give the job name and choose to create a freestyle job or copy an existing build job.

The second screenshot shows the latest ‘New Job’ dialog in Oracle Developer Cloud. It has a job name, a description (which you could previously only give in the build configuration interface), create new/copy existing job options, a checkbox to select ‘Use for Merge Request’, and the most noticeable addition: the Software Template dropdown.

Dialog in the old build system:

Dialog in the new build system:

What do these additional fields in the ‘New Job’ dialog mean?

Description: The job description, which you could previously only give in the build configuration interface. You will still be able to edit it in the build configuration, in the Settings tab.

Use for Merge Request: By selecting this option, your build will be parameterized to get the Git repo URL, Git repo branch and Git repo merge id and perform the merge as part of the build.

Software Template: With this release you will be using your own Oracle Compute Classic instances to run your build jobs. Earlier, build jobs were executed on an internal pool of compute. This gives you immense flexibility to configure your build machine with the software runtimes you need, using the user interface provided as part of Developer Cloud Service. These configurations will persist and the build machines will not be reclaimed, since they are your own compute instances. This also enables you to run multiple parallel builds without constraint, by spinning up new compute instances as required. You can create multiple VM templates with different software configurations and choose them while creating build jobs, as per your requirement.

Please use this link to refer to the documentation for configuring Software Templates.

Build Configuration Screen:

In the build configuration screen you will now have two tabs, as seen in the screenshot below.

  1. Build Configuration
  2. Build Settings

As seen in the screenshot below, the Build Configuration tab has Source Control, Build Parameters, Build Environment, Builders, and Post Build sub-tabs.

The Build Settings tab has the sub-tabs General, Software, Triggers, and Advanced. Below is a brief description of each:

General: As seen in the screenshot below, this is for generic build-job-related details. It is similar to the Main tab that existed previously.

Software: This tab is new in the build configuration and supports the Software Templates for build machines introduced in the current release, as described above. It lets you see or change the software template you selected while creating the build job, and also lets you see the software (runtimes) available in the template. Please see the screenshot below for reference.

Triggers: You will be able to add build triggers like a Periodic Trigger and an SCM Polling Trigger, as shown in the screenshot below. This is similar to the Triggers tab that existed earlier.

Advanced: Consists of build settings related to job-abort conditions, the retry count, and adding timestamps to the console output.

In the Build Configuration Tab

There are five sub-tabs in the Build Configuration tab, as described below:

Source Control: You can add Git as the source control from the ‘Add Source Control’ dropdown.

 

Build Parameters: Apart from the existing build parameters like String Parameter, Password Parameter, Boolean Parameter, and Choice Parameter, a new parameter type called Merge Request Parameters has been added. The Merge Request Parameters are added automatically when the ‘Use for Merge Request’ checkbox is selected while creating the build job. This adds the Git repo URL, Git repo branch, and Git repo merge id as build parameters.

Build Environment: A new build environment setting, SonarQube Settings, has been added apart from the existing Xvfb Wrapper, Copy Artifacts, and Oracle Maven Repository Connection.

SonarQube Settings – For static code analysis using the SonarQube tool. I will publish a separate blog on SonarQube in Developer Cloud.

Builders: To add build steps. There is one addition to the build steps: Docker Builder.

Docker Builder: Support for building Docker images and executing any Docker command. (A separate blog on Docker will follow.)

Post Build: To add Post Build configurations like deployment. SonarQube Result Publisher is the new Post Build configuration added in the current release.

Pipelines

After creating and configuring the build jobs, you can create a pipeline in the Pipelines tab using these build jobs. You can create a new pipeline using the ‘+New Pipeline’ button.

You will see the below dialog to create a new pipeline.

Once the pipeline is created, you can drag and drop the build jobs in the pipeline visual editor, and sequence and connect them as required.

You can also add execution conditions to the connections by double-clicking the links and selecting a condition from the dropdown, as shown in the screenshot below.

Once completed, the pipeline will be listed in the Pipelines tab as shown below.

 

You can start the pipeline manually using the play button. You can also configure it to auto-start when one of its jobs is executed externally.

Stay tuned for more blogs on latest features and capabilities of Developer Cloud Service. 

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Fluid in Seattle! Special Fluid Training Event

Jim Marion - Wed, 2018-04-11 14:07
May 23, 2018
PeopleTools Fluid UI Training
8.54 through 8.56
Led by Jim Marion

SpearMC and jsmpros are co-hosting a PeopleTools Fluid training event in Redmond, Washington immediately following the Spring PeopleSoft Northwest Regional User Group meeting. Through this event I will cover the exact same material I regularly teach online, but in person for a 40% discount off the online price. The event runs from Wednesday May 23 to Friday May 25 at the exact same venue as the Northwest Regional User Group meeting, the Seattle Marriott Redmond 7401 164th Avenue Northeast, Redmond, WA 98052. Additional details and registration information are available on the Registration Website.

Registration and More Information!

Building Docker on Oracle Developer Cloud Service

OTN TechBlog - Wed, 2018-04-11 13:40

The much awaited Docker build support on Oracle Developer Cloud Service is here. Now you will be able to build Docker images and execute Docker commands as part of the Continuous Integration and Continuous Deployment pipeline.

This blog describes what you can do with the Docker build support on Developer Cloud Service and how. It gives an overview of the Docker commands that you can run on Developer Cloud as part of a build job.

Note: There will be a series of blogs following up on using Docker build on Developer Cloud covering different technology stacks and usage.

Build Job Essentials:

A prerequisite for running Docker commands or using Docker Build steps in the build job is selecting a software template that includes Docker as a software bundle. Selecting a template with Docker ensures that the Build VM instantiated from the selected software template has the Docker runtime installed, as shown in the screenshot below. The template names may vary in your instance.

To learn about the new build system you can read this blog post. You can also refer to the documentation on configuring the Build VM.

 

You can verify whether Docker is part of the selected VM by navigating to Build -> <Build Job> -> Build Settings -> Software.

You can refer to this link to learn more about the new build interface on Developer Cloud.

 

Once you have created the build job with the right software template selected as described above, go to the Builders tab in the build job and click Add Builder. You will see Docker Builder in the dropdown, as shown in the screenshot below. Selecting Docker Builder gives you the Docker command options that are provided out of the box.

You can run all other Docker commands as well, by selecting Unix Shell Builder and writing your Docker command in it.

In the screenshot below you can see two commands selected from the Docker Builder menu.

Docker Version – This command interface prints the Docker version installed on your Build VM.

Docker Login – Using this command interface you can log in and create a connection with a Docker registry. By default this is Docker Hub, but you can use Quay.io or any other Docker registry available over the internet. If you leave the Registry Host empty, it connects to Docker Hub by default.

 

Docker Build – Using this command interface you can build a Docker image in Oracle Developer Cloud. You need a Dockerfile in the Git repository that you configure in the build job, and its path has to be given in the Dockerfile field. If the Dockerfile resides in the build context root, you can leave the field empty. You also have to give the image name.

 

Docker Push – Pushes the Docker image that you built using the Docker Build command interface to the Docker registry. You first have to use Docker Login to create a connection to the registry where you want to push the image, then use the Docker Push command, giving exactly the image name you used in the Docker Build command.

 

Docker rmi – Removes the Docker images we have built.

As mentioned previously, you can run any Docker command in Developer Cloud. If there is no UI for a command, you can use the Unix Shell Builder to write and execute it; a possible sequence is sketched below.
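
For example, a single Unix Shell Builder step could combine several of these commands as follows (the image name, tag, and registry user are illustrative only):

# build an image from the Dockerfile in the build workspace
docker build -t myuser/myapp:1.0 .
# log in to the registry (Docker Hub by default) and push the image
docker login -u myuser -p mypassword
docker push myuser/myapp:1.0
# remove the local image afterwards to keep the Build VM clean
docker rmi myuser/myapp:1.0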

In my follow-up blog series I will be using a combination of the out-of-the-box command interfaces and the Unix Shell Builder to execute Docker commands and accomplish build tasks. So watch out for the upcoming blogs here.

Happy Dockering on Developer Cloud!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Build and Deploy .Net Code using Oracle Developer Cloud

OTN TechBlog - Wed, 2018-04-11 13:27

The much awaited support for building and deploying .Net code on Oracle Cloud using Developer Cloud Service is here.

This blog post will show how you can use Oracle Developer Cloud Service to build .Net code and deploy it on Oracle Application Container Cloud. It will show how the newly released Docker build support in Developer Cloud can be leveraged to perform the build.

Technology Stack Used:

Application Stack: .Net for developing ASPX pages

Build: Docker for compiling the .Net code

Packaging Tool: Grunt to package the compiled code

DevOps Cloud Service: Oracle Developer Cloud

Deployment Cloud Service: Oracle Application Container Cloud

OS for Development: Windows 7

 

.Net application code source:

The ASP.Net application that we will build and deploy on Oracle Cloud using Docker can be downloaded from a Git repository on GitHub. Below is the link:

https://github.com/dotnet/dotnet-docker-samples/tree/master/aspnetapp

If you want to clone the GitHub repository, use the git command below after installing the Git CLI on your machine.

git clone https://github.com/dotnet/dotnet-docker-samples/

After cloning the above-mentioned repository, use the aspnetapp sample. Below is the folder structure of the cloned aspnetapp.

 

Apart from the four highlighted files in the screenshot below, which are essential for the deployment, all the other files and folders are part of the .Net application.

Note: You may not see the .git folder yet, as you have not initialized the Git repository.

Now we need to initialize the Git repository for the aspnetappl application, as we will push this code to the Git repo hosted on Oracle Developer Cloud. Below are the commands that you can use on your command line after installing the Git CLI and adding it to the path.

Command prompt > cd <to the aspnetappl folder>
Command prompt > git init
Command prompt > git add --all
Command prompt > git commit -m "First Commit"

The git commands above initialize the Git repository locally in the application folder and then add all the code in the folder to the local Git repository using the git add --all command.

Then commit the added files by using the git commit command, as shown above.

Now go to your Oracle Developer Cloud project and create a Git repository for the .Net code to be pushed to. For the purpose of this blog I created the Git repository by clicking the ‘New Repository’ button and named it ‘DotNetDockerAppl’, as shown in the screenshot below. You may name it as you like.

Copy the Git repository URL as shown below.

Then add the URL as the remote repository to the local Git repository that we have created using the below command:

 Command prompt > git remote add origin <Developer Cloud Git repository URL>

Then use the command below to push the code to the master branch of the Developer Cloud-hosted Git repository.

Command prompt > git push origin master

 

Deployment-related files that need to be created:

Dockerfile

This file will be used by the Docker tool to build the Docker image with .NET Core installed; the image will also include the .Net application code cloned from the Developer Cloud Git repository. You get a Dockerfile as part of the project; please replace the existing Dockerfile content with the one below.

 

FROM microsoft/aspnetcore-build:2.0
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -r linux-x64

In the script above we pull the aspnetcore-build:2.0 image, create a work directory, copy the .csproj file and restore dependencies, and then copy all the remaining code from the Git repo. Finally, the ‘dotnet’ command publishes the compiled code for a linux-x64 target.

manifest.json

This file is essential for the deployment of the .Net application on the Oracle Application Container Cloud.

{
  "runtime": {
    "majorVersion": "2.0.0-runtime"
  },
  "command": "dotnet AspNetAppl.dll"
}

The command attribute in the JSON specifies the DLL to be executed by the dotnet command. The manifest also specifies the .Net version to be used for executing the compiled code.

 

Gruntfile.js

This file defines the build task and is used by the build to identify the type of deployment artifact to generate, in this case a zip file, as well as the project files to include in that artifact. For the .Net application we only need to include everything in the publish folder, including the manifest.json for Application Container Cloud deployment. The folder is defined in the src attribute, as shown in the code snippet below.

 

/**
 * http://usejsdoc.org/
 */
module.exports = function(grunt) {
  require('load-grunt-tasks')(grunt);
  grunt.initConfig({
    compress: {
      main: {
        options: {
          archive: 'AspNetAppl.zip',
          pretty: true
        },
        expand: true,
        cwd: './publish',
        src: ['./**/*'],
        dest: './'
      }
    }
  });
  grunt.registerTask('default', ['compress']);
};

package.json

Since Grunt is a Node.js-based build tool, which we are using in this blog to build and package the deployment artifact, we need the package.json file to define the dependencies required for Grunt to execute.

{
  "name": "ASPDotNetAppl",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "dotnet AspNetAppl.dll"
  },
  "dependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-compress": "^1.3.0",
    "grunt-hook": "^0.3.1",
    "load-grunt-tasks": "^3.5.2"
  }
}

Once all the code is pushed to the Git repository hosted on Oracle Developer Cloud, you can browse and verify it as shown in the screenshot below, by going to the Code tab and selecting the appropriate Git repository and branch in the respective dropdowns at the top of the file list.

 

Build Job Configuration on Developer Cloud

We are going to use the newly introduced Mako build instead of the Hudson build system in DevCS.

Below are the build job configuration screenshots for ‘DotNetBuild’, which will build and deploy the .Net application:

Create a build job by clicking the “New Job” button and give it a name of your choice; for this blog I named it ‘DotNetBuild’. You also need to select the software template that contains the Docker and Node.js runtimes. In case you do not see the required software template in the dropdown, as shown in the screenshot below, you will have to configure it from the Organization -> VM Templates menu. This will kick-start the Build VM with the required software template. To learn more about configuring VMs and VM templates, you can refer to this link.

 

Now go to the Builders tab, where we configure the build steps. First we select a Unix Shell builder in which we build the Docker image using the Dockerfile in our Git repository, then create a container from that image (without starting it), copy the compiled code from the container to the build machine, and use the npm registry to download the Grunt build tool dependencies. Finally, we use the grunt command to build the AspNetAppl.zip file, which will be deployed on Application Container Cloud. A possible shell sequence is sketched below.
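
As a rough, hedged sketch (the image and container names, and the publish path inside the container, are assumptions and not necessarily what the actual job uses), the shell step could look like this:

# build the image defined by the Dockerfile in the Git repository
docker build -t aspnetappl .
# create (but do not start) a container from that image
docker create --name aspnetappl-build aspnetappl
# copy the published output from the container to the build machine
docker cp aspnetappl-build:/app/bin/Release/netcoreapp2.0/linux-x64/publish ./publish
docker rm aspnetappl-build
# add the ACCS manifest and package everything with Grunt
cp manifest.json ./publish
npm install
# run the default Grunt task (compress); assumes the grunt CLI is available on the Build VM
grunt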


Now configure the PSM CLI with the credentials and identity domain of your ACCS instance. Then configure another Unix Shell builder in which you provide the psm command to deploy, on Application Container Cloud, the zip file that we generated earlier with the Grunt build tool. A hedged example follows below.
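
A sketch of such a deployment step is shown here; the flag names and values are assumptions to be verified against the psm CLI help for your version, not the exact commands used in the job:

# one-time setup of the PSM CLI connection (prompts for user, password, identity domain and region)
psm setup
# push the packaged application to Application Container Cloud
# (verify the flags with: psm accs push -h)
psm accs push -n DotNetApp -r dotnet -m publish/manifest.json -p AspNetAppl.zip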

Note: All this will be done in the same ‘DotNetBuild’ build job that we have created earlier.

 

As the last part of the build configuration, in the Post Build tab configure the Artifact Archiver, as shown below, to archive the generated zip file for deployment.

 

The screenshot below shows the ‘DotNet’ application deployed in the Application Container Cloud service console. Copy the application URL as shown in the screenshot; the URL will vary for your cloud instance.

 

Use the copied URL to access the deployed .Net application in a browser. It will look as shown in the screenshot below.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Interval partition on index organized table

Tom Kyte - Wed, 2018-04-11 13:26
Hi, some points related to my query: 1. We are going to partition a table, which is almost 1.7 TB in size and has 6 indexes. 2. We are planning to use the Exchange Partition method. 3. That table has an "Operation_date" column which we want to...
Categories: DBA Blogs

function that will return the text between <a> and </a>

Tom Kyte - Wed, 2018-04-11 13:26
Hi tom, Suppose i have a text like <a>hkjfsdfjashkdfhask_75274_jhsdfbajh</a> Now i need a pl/sql function in which i will send <a> and </a> as parameters . Then the function will return me the text of above hkjfsdfjashkdfhask_75274_jhsdfbajh . ...
Categories: DBA Blogs

ORA-10260: limit size (1048576) of the PGA heap set by event 10261 exceeded when XMLROOT is used

Tom Kyte - Wed, 2018-04-11 13:26
We are hitting below error when we try to insert version and encoding to xmltype variable, using XMLROOT in our plsql code: ORA-10260: limit size (1048576) of the PGA heap set by event 10261 exceeded DIAGNOSTIC ANALYSIS: --------------...
Categories: DBA Blogs

Question on num_rows in dba_indexes

Tom Kyte - Wed, 2018-04-11 13:26
Hi - I just ran a full schema stats on one of our schemas. After the full schema stats was complete, I checked the num_rows in dba_tables and num_rows in dba_indexes. The num_rows from dba_tables looks good but num_rows from dba_indexes shows as 0 fo...
Categories: DBA Blogs

Learn and Connect with Oracle Support at COLLABORATE18

Chris Warticki - Wed, 2018-04-11 10:24

COLLABORATE 18 is less than two weeks away and Oracle Services will be front and center at this annual user conference to share best practices for adopting and optimizing Oracle technology. Oracle Support experts will be available in multiple sessions, at the exhibition hall, and in networking events throughout the conference to engage with you and illustrate how Oracle offers trusted, full-featured security, and comprehensive support. Find out how you can get more value from your Oracle Support investments.

Oracle Support Experts are Onsite to Talk with You

Don't miss the opportunity to interact with Oracle Support at the Support Stars Bar. Stop by the Oracle booth within the Exhibition Hall at the Mandalay Bay (booth #855) Monday, April 23, through Wednesday, April 25, for answers to your toughest support questions and learn about proactive support tools and services. Concerned about cybersecurity? You can also talk to us about security and steps you should take to help prevent cybercrime. You'll leave with valuable tips and best practices for maintaining, supporting, and upgrading Oracle products—from systems and databases to middleware, applications, and cloud.

Trusted | Secure | Comprehensive

Learn from Oracle Support experts in sessions on popular Oracle technology topics:

The vagrant way of provisioning - an introduction

Darwin IT - Wed, 2018-04-11 09:16
Around 2003, I guess, I was introduced to VMware by a colleague. I was hooked almost right away.
Since then I have made numerous VMs for as many purposes. I played around with several VMware products, but since Oracle acquired Sun, I have stuck with VirtualBox.

A few years ago the tool Vagrant was mentioned to me. But I did not see the advantage of it, since everything I needed to do, I could already do with VirtualBox itself.

However, over the years I found that maintaining VMs is a tedious job. Often I create and use a VM, then shut it down for months; when I need it again, I don't know its state anymore. Although you can use snapshots, it's nice to be able to start with a fresh install again.

In between, Oracle may have come up with another (minor) version of Fusion Middleware, Oracle Linux may have a new update, or there may be a new patch set. Then you want to do a re-install of the software. And I find it nice that I can drop a VM and recreate it from scratch again.

For those purposes Vagrant comes in handy. It allows you to define a box based on a template (a base box), configure its CPU and memory settings, add disks, and then, after the first boot, provision it.
So if I want to adapt a VM with a slightly different setting, or I need an extra disk, I destroy my box, adapt my Vagrant project file, and boot my box up again.

So, let's see what Vagrant actually is. Then, in a follow-up, I'll explain how I set up my Vagrant project.


Vagrant is an open-source software product for building and maintaining portable virtual software development environments. It helps you create and build virtual machines, especially in situations where you need to do that regularly and distribute them.

It simplifies software configuration management of virtualizations, to increase development productivity. Vagrant automates both the creation of VM’s and the provisioning of created VM's.  It does this by abstracting the configuration of the virtualization component and the installation/setup of the software within the VM, via a project file.

The architecture distinguishes two building blocks: Providers and Provisioners.
Providers are services to set up and create VMs, for instance:
  • VirtualBox
  • Docker
  • Vmware
  • AWS
Provisioners are tools to customize the configuration of VM, for example, configure the guest OS and install software within the VM. Possible provisioners are: 
  • Shell
  • Ansible
  • Puppet
  • Chef
I haven't made myself familiar with Ansible or Puppet yet (still on my list), so I work with the default provisioner: Shell.
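To give an idea, a minimal, hypothetical shell provisioner could look like the sketch below; the package names, user and paths are my own illustrations, not taken from a real project.

#!/bin/bash
# provision.sh - runs inside the box during the first 'vagrant up'
yum install -y unzip tar                                  # basic tooling on Oracle Linux
groupadd -f oinstall                                      # owner group for the Oracle software
id oracle >/dev/null 2>&1 || useradd -g oinstall oracle   # software owner, created only if missing
mkdir -p /u01/app/oracle && chown -R oracle:oinstall /u01
# ...unzip the JDK/FMW installers from a synced staging folder and run silent installs here...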

A Vagrant project is in fact a folder with a Vagrantfile in it. The Vagrantfile contains all the configuration of the resulting Vagrant box, the actual created VM.

A Vagrant project is always based on a base box. Often a downloadable box from a Vagrant repository is used; in fact, if you don't specify a URL but only a name, Vagrant will try to find it in its public repository. A popular one is hashicorp/precise64, used in many examples. However, I prefer to use my own local box, for two main reasons:
  • I then know what's in it.
  • It's local, so I don't have to download it.
To be able to be used by Vagrant, a box has the following requirements:
  • It contains an actual VM, with OS installed in it.
  • A vagrant user is defined, with sudo rights and an insecure key (downloadable from Vagrant's GitHub, though it will be replaced by a generated secure key at first startup); alternatively you can specify a password (as I do).
  • A NAT network adapter as the first NIC.
  • An SSH daemon running.
There is a tool called Packer that is able to create a box with an OS installed. I haven't tried it; instead I created a very simple VM in VirtualBox, installed Oracle Linux 7 Update 4 with the 'Server with GUI' option as a next-next-finish install, and defined the vagrant user in it as described above. Then, with the vagrant package command, I got my base box. It took a few iterations to get it the way I wanted, but once you get it right you should not need to touch it, unless another Linux update comes along.
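For reference, packaging such a hand-built VM into a base box comes down to a few commands; the VM and box names below are placeholders of mine.

vagrant package --base OL7U4-base --output ol7u4.box   # --base is the name of the VM in VirtualBox
vagrant box add ol7u4 ol7u4.box                        # register the box locally under a short name
vagrant box list                                       # verify that new projects can now use it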

Now I have only one simple base box, and I only need to define different Vagrant projects and a stage folder with the latest and greatest software downloaded. A simple vagrant up command will then create my VM anew and install all the software in it.
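The day-to-day cycle for such a project folder then boils down to a handful of commands (the box name is again just an example):

vagrant init ol7u4       # writes a Vagrantfile that refers to the local base box
vagrant up               # creates the VM and runs the provisioning scripts
vagrant ssh              # log in to the running box
vagrant destroy -f       # throw the VM away; the next 'vagrant up' rebuilds it from scratch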

Last year, at the NLOUG's Tech Experience '17, together with my colleague Rob, I spoke about how to script a complete Oracle Fusion Middleware environment. It was the result of a series of projects we did up to the event, in which we tried to automate the environment creation as much as possible. See my series of blogs on the matter. In the upcoming period, I plan to write about how to leverage these scripts with Vagrant to set up a complete VM, with the latest and greatest FMW in it.

So stay tuned.




Virtual Technical Training: Oracle Code- Online

Oracle Code Online is an online event that includes technical demonstrations and presentations from community advocates, Oracle ACEs, product leads, and Java Champions, under the Oracle Developers...

We share our skills to maximize your revenue!
Categories: DBA Blogs
