Feed aggregator

Study: HR and Finance Say Short-Termism and Culture Clashes Are Biggest Barriers to Collaboration

Oracle Press Releases - Tue, 2019-01-29 07:00
Press Release
Study: HR and Finance Say Short-Termism and Culture Clashes Are Biggest Barriers to Collaboration
Global study examines how data and analytics can bring finance and HR teams together

Redwood Shores, Calif.—Jan 29, 2019

A short-term mindset and entrenched cultural habits are the biggest barriers to collaboration between HR and finance teams, according to a new study from Oracle. The study of 1,510 HR, finance and business professionals found that in order to successfully unlock the value from data and help their organizations adapt to the changing nature of the global talent market, HR teams need to rethink analytics technology, skills and processes to improve collaboration with finance and drive a competitive advantage.

“HR and finance departments bring different, yet complementary skills to the table. While they traditionally have not worked together closely, that needs to change in order for organizations to create a competitive advantage in today’s evolving market and talent economy,” said Donald Anderson, Director, Organization & Talent Development, Oracle. “The first step to overcoming traditional barriers and bringing HR and finance teams together is having a collaborative mindset with the right skillsets to both gather and analyze data so that it can be used to make impactful business decisions. That alone will deliver significant benefits to an organization’s performance.”

Having Data is Not the Same as Being Able to Use it Effectively

The global talent market is more competitive than ever with the rise of new technologies, climbing costs of recruitment and increasing demand for new skills. To be successful in this rapidly changing market, HR teams need to rethink their approach to analytics, skills and collaboration to drive a competitive advantage.

  • 95 percent of HR and finance professionals plan to make data-driven collaboration a priority in 2019
  • To act on data in a meaningful way, HR and finance teams will need to acquire new skills. The survey found that 49 percent cannot currently use analytics to forecast outcomes and 81 percent are unable to determine future actions based on predictive data.

It’s Not About More Technology, it’s About You

While data and analytics have proliferated across HR and finance, the benefits are limited without effective collaboration and the ability to derive value from them. In order to reap the rewards, both departments must overcome short-termism, break through culture clashes and shrink the skillset gap.

  • The biggest barrier to collaboration between HR and finance is a short-term mindset, with 71 percent saying their teams focus on quarters rather than future strategic direction.
  • Culture clashes between departments were another top challenge, with nearly a third (29 percent) ranking traditionally separate habits as the biggest barrier. Other barriers included mismatched skillsets (27 percent) and organizational silos (17 percent).
  • HR teams also lack the skills to act on data and solve issues (70 percent), cultivate quantitative analysis and reasoning (67 percent) and use analytics to forecast workforce needs (55 percent).

HR and Finance Leaders Need One View of the Truth

The majority (80 percent) of organizations believe HR and finance teams are already helping them make better data-driven decisions. Their teams will still need to acquire new skills, but with an increased focus on collaboration, organizations will be able to gain even bigger business benefits:

  • 88 percent of respondents believe HR and finance collaboration will improve business performance; 76 percent believe it will enhance organization agility.
  • Over half (57 percent) of organizations plan to achieve more holistic, enterprise-wide insight through collaboration and 52 percent of HR and finance professionals believe it will help them become more strategic partners.

AI to Pave the Way to Greater Collaboration and Better Business Results

HR and finance professionals are looking to emerging technologies like artificial intelligence (AI) to help drive business results:

  • While a quarter (25 percent) of survey respondents are primarily using AI to identify at-risk talent and model their talent pipeline (22 percent), they are rarely using AI to forecast performance (18 percent) or find top talent (15 percent).
  • Over the next year, 71 percent of survey respondents plan to use AI to predict high performing candidates in recruitment and source best-fit candidates with resume analysis (70 percent).
  • Other AI priorities for survey respondents include modeling their talent pipeline (58 percent), flagging at-risk employees through attrition modeling (52 percent) and supporting employee interactions with chatbots (38 percent).
 

“The world of analytics and AI opens tremendous doors for HR to harness meaningful insights in order to make smarter decisions and create a talent advantage,” said Tom Davenport, Babson professor and analytics expert. “Seeing that so many HR professionals are planning to invest heavily in AI over the next year is promising. It means we’ll begin to see more strategic results and businesses competing on an entirely new level to find the right talent.”

Methodology

This survey interviewed 1,510 HR, finance and business professionals in late 2018. The respondents came from a variety of industries and geographies, and all were from companies with revenues of US$100 million or more.

Contact Info
Celina Bertallee
Oracle
+1.559.283.2425
celina.bertallee@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Celina Bertallee

  • +1.559.283.2425

Oracle ERP Cloud Recognized as the Only Leader in Gartner Magic Quadrant Report

Oracle Press Releases - Tue, 2019-01-29 07:00
Press Release
Oracle ERP Cloud Recognized as the Only Leader in Gartner Magic Quadrant Report
Cloud ERP offering recognized for completeness of vision and highest for ability to execute

Redwood Shores, Calif.—Jan 29, 2019

Oracle today announced Oracle ERP Cloud has been named a Leader in Gartner’s Magic Quadrant for Cloud ERP for Product-Centric Midsize Enterprises research report. Out of 14 products evaluated, Oracle ERP Cloud is listed as the only Leader in the Gartner analysis. A complimentary copy of the report is available here.

Gartner notes that “Leaders demonstrate a market-defining vision of how ERP product-centric systems and processes can be supported and improved by moving them to the cloud. They couple this with a clear ability to execute this vision through products, services and go-to-market strategies. They have a strong presence in the market and are growing their revenues and market shares. In the cloud ERP suite market, Leaders show a consistent ability to win deals with organizations of different sizes, and have a good depth of functionality across all areas of operational and administrative ERP. They have multiple proofs of successful deployments by customers based in their home region and elsewhere. Their offerings are frequently used by system integrator partners to support business transformation initiatives.”

“Our focus on creating a complete and innovative financial, operational, manufacturing and supply chain solution delivered natively on the Cloud is helping organizations of all sizes and across all industries drive better business outcomes,” said Rondy Ng, senior vice president, Oracle Applications Development. “We are proud to deliver on our promises for Oracle ERP Cloud and are very pleased to be recognized by Gartner as the sole Leader in this report. We believe this acknowledgement further validates Oracle’s commitment to delivering continuous innovation in our market-leading ERP Cloud for our customers.”

Oracle ERP Cloud includes complete ERP capabilities across Financials, Procurement, and Project Portfolio Management (PPM), as well as Enterprise Performance Management (EPM), and Governance Risk and Compliance (GRC). Together with Supply Chain Management (SCM) and native integration with the broader Oracle SaaS portfolio for Human Capital Management (HCM) and Customer Experience (CX), Oracle offers customers a practical, business-driven, rapid adoption path forward.

Oracle’s broad portfolio of financial management and planning cloud offerings has garnered industry recognition. Oracle ERP Cloud was named a Leader in Gartner’s most recent “Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises”, while Oracle was also positioned in the Leaders quadrant for the 2018 “Magic Quadrant for Cloud Financial Planning and Analysis Solutions” and in the 2018 “Magic Quadrant for Cloud Financial Close Solutions.”

Gartner, Magic Quadrant for Cloud ERP for Product-Centric Midsize Enterprises, Mike Guay, John Van Decker, et al., 31 October 2018. Gartner, Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises, John Van Decker, Robert Anderson, Mike Guay, 29 May 2018. Gartner, Magic Quadrant for Cloud Financial Planning and Analysis Solutions, Christopher Iervolino, John Van Decker, 24 July 2018. Gartner, Magic Quadrant for Cloud Financial Close Solutions, John Van Decker, Christopher Iervolino, 26 July 2018.

Contact Info
Bill Rundle
Oracle PR
+1 650 506 1891
bill.rundle@oracle.com
Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Talk to a Press Contact

Bill Rundle

  • +1 650 506 1891

New Oracle Utilities Opower Cloud Service Enables Utilities to Engage Customers at the Grid Edge

Oracle Press Releases - Tue, 2019-01-29 07:00
Press Release
New Oracle Utilities Opower Cloud Service Enables Utilities to Engage Customers at the Grid Edge
Aids utilities in becoming trusted advisors to customers adopting solar and other distributed energy resources

Redwood Shores, Calif.—Jan 29, 2019

As emerging energy sources—such as solar—compel customers to become more active participants in the grid, their energy management needs are getting more complex. Oracle is equipping utilities to serve as trusted advisors to these increasingly active customers with its new Oracle Utilities Opower Distributed Energy Resources Customer Engagement Cloud Service.

Available today, the offering initially provides rooftop solar customers helpful insight into their utility bills and energy savings and will expand to address other types of distributed energy resources (DERs) in the future such as electric vehicles and residential battery storage.

Oracle Utilities Opower DER Customer Engagement is one of four new products and more than 100 new features in the Opower platform, the industry’s leading suite of customer engagement and energy efficiency cloud services. To learn more, visit today’s release here.

According to Wood Mackenzie, “Solar is the second fastest growing resource after natural gas in the U.S., and residential solar has grown by 500+MW every quarter over the last four years.” However, many early adopters have been dismayed by the gap between the utility bill savings they expected and the reality. This has resulted in an influx of calls to their utility providers, which have proven to be up to $8 more expensive and considerably longer than non-solar-related calls.

Based on extensive research into the solar customer journey, Opower DER Customer Engagement addresses these challenges by providing utility customers with a personalized set of insights and recommendations relating to their overall energy generation, usage, and resulting bill.

“Engaging with our customers and providing them clear, consistent information about their energy consumption and production is critical,” said Feltrin Davis, Manager, Business Intelligence and Data Analytics Smart Energy Services, Exelon. “With Oracle, we have been able to regularly deploy new web tools for our solar/net energy meter customers and are updating them frequently to ensure we are providing the best experience possible.”

With Opower, utilities can now send new solar customers onboarding communications explaining what to expect and how solar billing works. New or existing customers can leverage online tools and insights to understand their net energy consumption. In addition, solar customers have a simple overview of their bills and a comparison of how their energy costs have changed since adopting solar. As a result, customers are happier and utilities reduce expensive call center volume.

“As distributed energy resources continue to rise, consumers are becoming more active and in control of their energy footprint – both as users and producers,” said Dan Byrnes, SVP of product development, Oracle Utilities. “They are looking to their utility to help guide them throughout this journey and provide the clear, accessible insights they need to make more informed decisions. This innovation is a critical step forward in enabling the kind of deeper relationship between utilities and their customers which is essential as the industry moves towards a more customer-centric grid model.”

This offering, the only one of its kind on the market, is powered by the world’s largest residential energy data analytics platform, with over 1.6 trillion meter reads from more than 60 million households and businesses across 100 utilities.

Contact Info
Kris Reeves
Oracle Corporation
+1 925 787 6744
kris.reeves@oracle.com
Wendy Wang
H&K Strategies
+1 979 216 8157
wendy.wang@hkstrategies.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1 925 787 6744

Wendy Wang

  • +1 979 216 8157

Oracle Utilities Opower Innovations Help Utilities Connect with Every Customer

Oracle Press Releases - Tue, 2019-01-29 07:00
Press Release
Oracle Utilities Opower Innovations Help Utilities Connect with Every Customer
With more than 100 enhancements to the Opower platform, Oracle empowers utilities to better engage customers as they turn to diverse and dynamic energy sources and rate programs

Redwood Shores, Calif.—Jan 29, 2019

The Oracle Utilities Opower customer engagement platform now includes four entirely new experiences and over 100 innovations for utilities to connect with every residential customer. These enhancements deliver measurable results at scale: cost-effective energy savings, satisfied customers, lower service costs, new revenue streams, and more connected homes.

“With the rise in dynamic rates, complex bills, and consumer technology on the grid edge, serving the diverse needs of today’s utility customers is becoming increasingly complex,” said Dan Byrnes, SVP of product development, Oracle Utilities. “Point solutions only serve small pockets of customers. With the latest release of the Opower platform, utilities can connect with every customer to increase engagement and satisfaction, drive down service costs, and balance demand on the grid.”

Since breaking onto the scene more than a decade ago, the Opower platform has continued to stand as the most complete and effective customer engagement platform in the utility industry. This latest release expands Opower even further with four new digital customer experiences. Each new offering leverages machine learning to render actionable energy insights, and experimental program design to deliver measurable results.

“Oracle’s latest improvements to its Opower platform demonstrate the company’s ability to continue distinguishing itself as a leader in home energy management, as well as its commitment to innovation and helping customers achieve savings at scale,” says Paige Leuschner, research analyst at Navigant Research.

The new experiences include:

  • Distributed Energy Resources (DER) Customer Engagement: self-service customer web tools and alerts to help utilities effectively serve customers with distributed energy resources such as solar. To learn more, visit today’s release here.
  • Behavioral Load Shaping: a personalized, digital experience that enables utility customers on dynamic rates to shift their energy use away from daily peaks and control their complex bills.
  • Digital Self Service Transactions: a suite of embeddable web and mobile features for utility customers to start service, pay their bills and manage their accounts, all natively integrated with Oracle Utilities’ market-leading customer information system (CIS).
  • AMI Customer Education Reports: multi-channel reports to educate and engage utility customers during a smart meter rollout, with new insights and recommendations for managing their energy use.
 

This release also includes more than 100 new features to amplify demand side management and customer care results, including:

  • Adaptive Intelligent Recommendations: Opower recommendations learn and adapt to each customer’s disaggregated end use and cross-channel engagement patterns. An expanded library of recommendations can influence over 4 TWh of energy-saving behavior every year and create a halo effect of 15 percent more participation in utility programs.
  • Digital Energy Reports: Email Home Energy Reports have been upgraded with a dozen behavioral insight modules and new configurable content options to make them more dynamic and feature-rich than paper reports.
  • Embedded Web 2.0 Insights: Visual energy insights for utility web and mobile apps now render with more speed, reliability, and styling control than ever before to improve time on site by 25 percent.
  • Unauthenticated Online Audits: Frictionless online audits allow customers to securely access the Opower audit with a single tap or click and potentially double their behavioral energy savings.
  • Dynamic Campaigns: Platform tools to rapidly curate targeted marketing across the Opower experience and drive adoption of utility programs and products—such as connected home devices—at up to a 61 percent higher rate than traditional and digital advertising.
  • Proactive Billing Alerts: With new weather insights, dual fuel support, and customer-set budget goals, these upgraded alerts deliver digital energy savings and reduce high bill calls by up to 22 percent.
  • Connected Home and Enterprise Integrations: Mature APIs and a growing library of integrations into Oracle customer, meter, and marketing applications and third-party connected home platforms make it easy to plug Opower into core utility operations, engage customers through digital assistants, and enable smart home device control.
 

“Customers trust their utility for advice on how to save energy, control their bills, adopt new technologies and use them well. With these new innovations, we’re helping utilities connect it all for their customers,” said Byrnes. “And we’re just getting started.”

Contact Info
Kris Reeves
Oracle Corporation
+1 925 787 6744
kris.reeves@oracle.com
Wendy Wang
H&K Strategies
+1 979 216 8157
wendy.wang@hkstrategies.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1 925 787 6744

Wendy Wang

  • +1 979 216 8157

MINA Group Continues to Redefine the Dining Experience with Oracle

Oracle Press Releases - Tue, 2019-01-29 07:00
Press Release
MINA Group Continues to Redefine the Dining Experience with Oracle
Simphony Point of Sale Platform and Oracle MICROS Hardware Power Leading Restaurant Group’s New The Street Food Hall Concept in Hawaii

Redwood Shores, Calif.—Jan 29, 2019

The MINA Group is continually testing the boundaries of restaurants and opportunities to transform the fine dining experience, bringing new flavors to customers across the world. There is no better example of this philosophy than the group’s innovative The Street Food Hall concept in Waikiki, Hawaii. Powered by the Oracle Food and Beverage Simphony restaurant management platform and Oracle MICROS 600 series point-of-sale devices, the hall brings together 10 globally inspired food stations, enabling visitors to taste multiple cuisines in a casual, festive setting.

In this ambitious project, aimed at providing an ever-changing dining experience for guests, MINA Group operates each of these stations separately, each led by an individual chef with an evolving menu. With the regular launch of new concepts, effectively managing restaurant operations is critical to ensuring a consistent focus on service. Oracle was selected for its ability to manage each station as its own revenue center while aggregating analysis of the entire venue; providing durable hardware that can meet the demands of kitchen environments; and minimizing technology complexity, allowing the creativity of MINA Group and its chefs to shine.

“As operators, we’ve had the opportunity to work with several different POS systems where we weren’t able to be as omnipresent as we’d like to be in order to have an intimate knowledge of what was going on in our restaurants,” says Patric Yumul, president, MINA Group. “The choice was clear to go with Oracle Simphony Cloud because of their reliability, data, and ability to see how the entire operation is doing and how our marketing efforts are working.”

Oracle Food and Beverage has long collaborated with MINA Group, with the Oracle MICROS point of sale solutions utilized at signature locations including MICHAEL MINA, PABU, RN74, International Smoke and The Mina Test Kitchen. MINA Group continues to implement Oracle solutions for their ability to meet the needs of the unique dining concepts – whether they be iconic fine dining establishments with thousands of wine options, multiple quick service menus within a single establishment, or new restaurants evolved out of The MINA Test Kitchen.

“MINA Group is a true innovator among restaurant management companies with an unprecedented breadth of unique concepts,” said Simon de Montfort Walker, senior vice president and general manager, Oracle Food and Beverage. “With The Street Food Hall in Waikiki, MINA Group is continuing to demonstrate the creative applications of Oracle Food and Beverage solutions. As the MINA Group continues to innovate new dining experiences, Simphony will prove to be a strategic investment that supports culinary creativity, differentiated service and streamlined operations.”

Contact Info
Valerie Beaudett
Oracle
+1.650.400.7833
valerie.beaudett@Oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

For more information about Oracle Food and Beverage, please visit www.Oracle.com/Food-Beverage

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Valerie Beaudett

  • +1.650.400.7833

[Video] Oracle Cloud Infra Architect Certification 1Z0-932 Everything (Dumps/ Document/ Mistakes) You Must Know

Online Apps DBA - Tue, 2019-01-29 03:10

Grow your Skills, & choose Certification as your next Step! Watch a Video to start your journey for Oracle Cloud Architect Certification 1Z0-932 & Everything (Dumps/ Document/ Mistakes) You Must Know! Visit: https://k21academy.com/1z093211 to learn all about: ✔ What & Why of Oracle Cloud Architect certification ✔ Who should go for Oracle Cloud Infra Architect […]

The post [Video] Oracle Cloud Infra Architect Certification 1Z0-932 Everything (Dumps/ Document/ Mistakes) You Must Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

changing priority of sessions

Tom Kyte - Mon, 2019-01-28 21:06
Hi Tom, a salesman from Oracle told me that i can identify sessions of users that are joining terabyte tables with other terabyte tables and i am able to reduce their priority to prevent other users from suffering from such a kind of queries that ...
Categories: DBA Blogs

nls_date_format

Tom Kyte - Mon, 2019-01-28 21:06
hi Tom, i saw your FAQ regarding this but couldn't find the answer. my initORCL.ora's nls_date_format is being ignored! is there a way to adjust sqlplus so it uses a certain date format always - other than by using a sql statement or a trigg...
Categories: DBA Blogs

system tablespace free 0.01%

Tom Kyte - Mon, 2019-01-28 21:06
I do not have DBA background but I recently inherited an Oracle database on NT that's extensively used for testing. While checking the tablespaces, I noticed that system ts is only 0.01% free. What does this mean and what should I do to have more fre...
Categories: DBA Blogs

Oracle Utilities Customer To Meter V2.7.0.1.0 now available

Anthony Shorten - Mon, 2019-01-28 21:00

Oracle Utilities Customer To Meter V2.7.0.1.0 is now available for download from Oracle Software Delivery Cloud. This release is based upon Oracle Utilities Application Framework V4.4.0.0.0 and contains the following software:

  • Oracle Utilities Application Framework V4.4.0.0.0
  • Oracle Customer Care and Billing V2.7.0.1.0
  • Oracle Meter Data Management V2.3.0.0.0 including SGG, SOM and Settlements
  • Oracle Work and Asset Management V2.2.0.3.0 (also known as Operational Device Management)

Refer to the release notes provided with this product and related products for a full list of changes and new functionality in this release.

Oracle Applications Unlimited: Investing in Growth and Innovation

Chris Warticki - Mon, 2019-01-28 18:33

Your applications should grow with your business. Applications Unlimited provides new customer-driven features without requiring application upgrades—giving you more time to focus on your business.

Applications Unlimited is Oracle’s commitment to continuous innovation, combined with a commitment to offer Oracle Premier Support through at least 2030 for:

  • Oracle E-Business Suite
  • JD Edwards EnterpriseOne
  • PeopleSoft
  • Siebel

Investing in Continuous Innovation

Applications Unlimited provides you with a transparent roadmap, new features that do not require application upgrades, and an ongoing investment in research and development.

At the heart of this is a continuous commitment to our customers, our products, and to innovation—Applications Unlimited takes us forward into our shared future.

Explore the Benefits of Applications Unlimited

Oracle’s industry-leading commitment through at least 2030 helps you to maximize the value of your current Oracle investment and plan for your company’s future. You can find the details for your Applications Unlimited products on the datasheet and in the Lifetime Support Policy for Oracle Applications.

If you are an Applications Unlimited customer, we will be there to help support your company’s plans. Your long-term business strategy guides your company to success—underpinning that success are the products you run your business on.

Learn more about the Applications Unlimited products:

The best free database... Google is wrong!

Dimitri Gielis - Mon, 2019-01-28 12:23
When you search on Google for "the best free database", the below is what you get (search done 11-NOV-2018 and again on 12-JAN-2019 and 28-JAN-2019). To my surprise, there's no Oracle on the list? The reason is that Google took the answer from this review. As I don't want people to see this screenshot, I put in red what is wrong with the answer, so in case Google shows images and people don't read this blog post, they don't get the wrong answer ;)

The above is so wrong: for me, the best free database in the world is Oracle Express Edition (XE). Oracle released XE 18c on October 19th, 2018. This database is unbelievable. You basically get an Enterprise Edition version and almost all options are turned on! It's amazing; the only restrictions are on the amount of RAM (2GB) and disk space (12GB). You even have the pluggable database architecture and can create 3 PDBs (pluggable databases).
In my opinion, there is no other free database in the world that will beat this. Below I will go into more detail on why I like Oracle XE 18c so much, but first, let me show you that Google actually knows the right answer too.

Google says "People also ask": "What are the top 5 databases available on the market?" and here Oracle is number one. The other question is "What is the best database software for small businesses?" Oracle number one again. If the question would be "What is the best database software for enterprise businesses?" Oracle is number one too, this is common knowledge.


Google's algorithm just got the answer to the first question wrong. How can Oracle be number one and the best, but not in the free section, when their best database is available for free? :)

Google allows you to comment on their search results, which I did:



Why do I like Oracle XE 18c so much?

When we talk about Oracle XE, we really talk about the full Oracle database in general. Yes, there are a couple of limitations, but nevertheless, you get the full feature set of the Oracle database! All the good stuff that makes Oracle shine is there: for the best performance you can use partitioning and online index rebuilds (and, in the future, automatic index creation!); to increase high availability you have the full flashback technology at your disposal; for security Oracle has VPD, Real Application Security, Database Vault... Oracle plans to release a new version of XE every year too, so you always have the latest and greatest.

I should write another blog post on why I like the Oracle database so much, but I encourage you to just try it and decide for yourself.

Getting started with Oracle XE

If you just want to try Oracle XE, the easiest way, without touching your system, is most likely to go with the Oracle Docker container. Here are the steps to get Oracle XE running in an Oracle Docker container.

If you don't have Docker and Git yet, download and install them first.

Open a Terminal or Command Prompt and run the following commands:

git clone https://github.com/oracle/docker-images.git

cd docker-images/OracleDatabase/SingleInstance/dockerfiles

Download Oracle XE.
Copy the oracle-database-xe-18c-1.0-1.x86_64.rpm file into the docker-images/OracleDatabase/SingleInstance/dockerfiles/18.4.0 directory.

Move on by running the following command:

 ./buildDockerImage.sh -v 18.4.0 -x -i



docker images

docker run --name OracleXE -p 1521:1521 -p 8080:8080 -e ORACLE_SID=XE -e ORACLE_PWD=oracle -v /Users/dgielis/git/docker-images/OracleDatabase/SingleInstance/dockerfiles:/opt/oracle/oradata oracle/database:18.4.0-xe

    ...

And voila, you are done! (more info on the Oracle Docker images)

To get a view in your database you can use Oracle SQL Developer. Here's how you connect to it:
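In SQL Developer you would create a connection with host localhost, port 1521, and SID XE (or service XEPDB1, the default pluggable database name in XE 18c), using the system user and the password set in the docker run command above (oracle). As a quick sketch, assuming you have SQL*Plus installed locally, the same connection can be tested from the command line:

# Quick connection test; host, port, SID and password come from the docker run command above
sqlplus system/oracle@//localhost:1521/XE

# Or connect to the default pluggable database
sqlplus system/oracle@//localhost:1521/XEPDB1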



The next thing would be to install Oracle APEX, so you not only have the best database in the world, but also the best low-code platform in the world, which works amazingly well with the Oracle database.
Categories: Development

Code Merge as part of a Build Pipeline in Oracle Developer Cloud

OTN TechBlog - Mon, 2019-01-28 12:11

This blog will help you understand how to use code merge as part of a build pipeline in Oracle Developer Cloud. You’ll use out-of-the-box build job functionality only. This information should also help you see how useful this feature can be for developers in their day-to-day development work.

Creating a New Git Repository

Click Project Home in the navigation bar. In the Project page, select a project to use (I chose DemoProject), and then click the + Create Repository button to create a new repository. I’ll use this repository for the code merge in this blog.

In the New Repository dialog box, enter a name for the repository. I used MyMergeRepo, but you can use whatever name you want. Then, select the Initialize repository with README file option and click the Create button.

Creating the New Branch

Click Git in the navigation bar. In the Refs view of the Git page, from the Repositories drop-down list, select MyMergeRepo.git. Click on the + New Branch button to create a new branch.

In the New Branch dialog, enter a unique branch name. I used change, but you can use any name you want. Select the appropriate Base branch from the drop-down list. For this repository, master branch is the only option we have. Click the Create button to create the new branch.

 

Creating the Build Job Configuration

In the navigation bar, click Builds. In the Jobs tab, click on the + Create Job button to create a new build job.

 

In the New Job dialog, enter a unique name for the job name. I’ll use MergeCode but you can enter any name you want. Select the Use for Merge Request checkbox, the Create New option, and then select any Software Template from the drop-down list. You don’t need a specific software bundle to execute a merge. The required software bundle, which by default is part of any software template you create, is sufficient. Finally, click the Create Job button.

Note: If you are new to creating Build VM templates and Build VMs, see Set Up the Build System.

 

When you create a build job with the Use for Merge Request checkbox selected, the Merge Request Parameters get placed in the Repository and Branch fields of the Source Control tab. You can go ahead and select the Automatically perform build on SCM commit checkbox.

In the Build Parameters tab, you’ll notice that Merge Request parameters like GIT_REPO_URL, GIT_REPO_BRANCH, and MERGE_REQ_ID were added automatically. After reviewing it, click on the Save button.
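These parameters are injected into the build environment, so a build step can reference them. The out-of-the-box merge validation needs no extra steps, but purely as an illustrative sketch (not part of this walkthrough), a Unix Shell build step could use the parameters like this:

# Hypothetical shell build step: the merge request parameters are exposed as environment variables
echo "Repository URL : ${GIT_REPO_URL}"
echo "Review branch  : ${GIT_REPO_BRANCH}"
echo "Merge request  : ${MERGE_REQ_ID}"

# Example extra check: verify the review branch merges cleanly into master without committing
# (assumes the job has checked out the review branch into the workspace)
git fetch origin master
if git merge --no-commit --no-ff FETCH_HEAD; then
    echo "No merge conflicts detected"
    git merge --abort 2>/dev/null || true    # undo the test merge; ignore "nothing to abort"
else
    echo "Merge conflicts detected"
    exit 1
fi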

 

Creating the Merge Request

In the navigation bar, click Merge Requests.  Then click on the + Create Merge Request button.

In the New Merge Request wizard, select the Git repository (MyMergeRepo), the target branch (master), and the review branch (change). You won’t see any commits because we haven’t done any yet. Click the Next button to advance to the second page.

On the Details page, select MergeCode for Linked Builds and select a reviewer. If you created an issue that needs to be linked to the merge request, link it with Linked Issues. Click the Next button to advance to the last page.

You can change the description for the merge request or just use the default one. Then click the Create button to create the merge request.

In the Linked Builds tab, you should see the MergeCode build job as the linked build job.

 

Changing a File and Committing the Change to the Git Repository

In the Git page, select the MyMergeRepo.git repository from the repository drop-down list and the change branch in the branches drop-down list. Then click the README.md file link.

Click the pencil icon to edit the file.

Add some text (any text will do), and then click the Commit button.
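Equivalently, since any commit pushed to the change branch also triggers the build, the same edit could be made from a local clone using the git command line (the repository URL below is a placeholder):

# Hypothetical git CLI equivalent of the web edit above
git clone -b change https://<your-devcs-instance>/<project>/MyMergeRepo.git
cd MyMergeRepo
echo "trigger merge validation build" >> README.md
git commit -am "Update README to trigger the MergeCode build"
git push origin change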

 

The code commit triggers the MergeCode build.

 

When a build of a linked job runs, a comment is automatically added to the Conversation tab. When the MergeCode build completes successfully, it auto-approves the merge request and adds itself to the Approve section of the Review Status list, waiting for an approval from the reviewer assigned to the merge request.

Once the reviewer approves the merge request, the review branch code is ready to be merged into the target branch. To merge the code, click the Merge button.

Note: For this merge request, Alex Admin, the user who is logged in, is the reviewer.

 

By including Merge Request parameters as part of a build job, you can be sure that every commit will be automatically validated as conflict-free and approved. This comes in handy when multiple commits are linked to a merge request through the merge-enabled build job. The merge request will still wait for the assigned reviewer(s) to review the code, approve the changes, and then merge the code into the target branch.

This feature helps developers collaborate efficiently with their team members in their day-to-day development activities.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Video : JSON Data Guide

Tim Hall - Mon, 2019-01-28 04:17

Today’s video is an overview of the JSON Data Guide functionality introduced in Oracle 12.2.

If videos aren’t your thing, you can read the articles instead. This video focuses on the main features that were introduced in 12.2, but there are some nice additions in 18c also.

The cameo in today’s video is Toon Koppelaars of #SmartDB fame.

Cheers

Tim…

PS. Subscribe to my YouTube channel here.


JRE 1.8.0_201/202 Certified with Oracle EBS 12.1 and 12.2

Steven Chan - Mon, 2019-01-28 02:15


Java Runtime Environment 1.8.0_201 (a.k.a. JRE 8u201-b9) and 1.8.0_202 (a.k.a. JRE 8u202-b9) and later updates on the JRE 8 codeline are now certified with Oracle E-Business Suite 12.1 and 12.2 for Windows clients.

Java Web Start is available

This JRE release may be run with either the Java plug-in or Java Web Start.

Java Web Start is certified with EBS 12.1 and 12.2 for Windows clients.  

Considerations if you're also running JRE 1.6 or 1.7

JRE 1.7 and JRE 1.6 updates included an important change: the Java deployment technology (i.e. the JRE browser plugin) is no longer available for those Java releases. It is expected that Java deployment technology will not be packaged in later Java 6 or 7 updates.

JRE 1.7.0_161 (and later 1.7 updates) and 1.6.0_171 (and later 1.6 updates) can still run Java content, but they cannot launch Java content via the browser.

End-users who only have JRE 1.7 or JRE 1.6 -- and not JRE 1.8 -- installed on their Windows desktop will be unable to launch Java content.

End-users who need to launch JRE 1.7 or 1.6 for compatibility with other third-party Java applications must also install the JRE 1.8.0_152 or later JRE 1.8 updates on their desktops.

Once JRE 1.8.0_152 or later JRE 1.8 updates are installed on a Windows desktop, it can be used to launch JRE 1.7 and JRE 1.6. 

How do I get help with this change?

EBS customers requiring assistance with this change to Java deployment technology can log a Service Request for assistance from the Java Support group.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

All patches required for ensuring full compatibility of the E-Business Suite with JRE 8 are documented in these Notes:

For EBS 12.1 & 12.2

Implications of Java 6 and 7 End of Public Updates for EBS Users

The Oracle Java SE Support Roadmap and Oracle Lifetime Support Policy for Oracle Fusion Middleware documents explain the dates and policies governing Oracle's Java Support.  The client-side Java technology (Java Runtime Environment / JRE) is now referred to as Java SE Deployment Technology in these documents.

Starting with Java 7, Extended Support is not available for Java SE Deployment Technology.  It is more important than ever for you to stay current with new JRE versions.

If you are currently running JRE 6 on your EBS desktops:

  • You can continue to do so until the end of Java SE 6 Deployment Technology Extended Support in June 2017
  • You can obtain JRE 6 updates from My Oracle Support.  See:

If you are currently running JRE 7 on your EBS desktops:

  • You can continue to do so until the end of Java SE 7 Deployment Technology Premier Support in October 2017.
  • You can obtain JRE 7 updates from My Oracle Support.  See:

If you are currently running JRE 8 on your EBS desktops:

  • You can continue to do so until the end of Java SE 8 Deployment Technology Premier Support in March 2019
  • You can obtain JRE 8 updates from the Java SE download site or from My Oracle Support. See:

Will EBS users be forced to upgrade to JRE 8 for Windows desktop clients?

No.

This upgrade is highly recommended but remains optional while Java 6 and 7 are covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 and 7 desktop clients. Note that there are different impacts of enabling JRE Auto-Update depending on your current JRE release installed, despite the availability of ongoing support for JRE 6 and 7 for EBS customers; see the next section below.

Impact of enabling JRE Auto-Update

Java Auto-Update is a feature that keeps desktops up-to-date with the latest Java release.  The Java Auto-Update feature connects to java.com at a scheduled time and checks to see if there is an update available.

Enabling the JRE Auto-Update feature on desktops with JRE 6 installed will have no effect.

With the release of the January Critical Patch Update, the Java Auto-Update mechanism will automatically update JRE 7 plug-ins to JRE 8.

Enabling the JRE Auto-Update feature on desktops with JRE 8 installed will apply JRE 8 updates.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

Mac users running Mac OS X 10.12 (Sierra) can run JRE 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 8 for EBS application tier servers?

No.

JRE is used for desktop clients.  JDK is used for application tier servers.

JRE 8 desktop clients can connect to EBS environments running JDK 6 or 7.

JDK 8 is not certified with the E-Business Suite.  EBS customers should continue to run EBS servers on JDK 6 or 7.

Known Issues

Internet Explorer Performance Issue

Launching JRE 1.8.0_73 through Internet Explorer results in a delay of around 20 seconds before the applet starts to load (the Java Console will come up if enabled).

This issue is fixed in JRE 1.8.0_74. Internet Explorer users are recommended to upgrade to this version of JRE 8.

Form Focus Issue

Clicking outside the frame during forms launch may cause a loss of focus when running with JRE 8 and can occur in all Oracle E-Business Suite releases. To fix this issue, apply the following patch:

References

Related Articles
Categories: APPS Blogs

Documentum – MigrationUtil – 1 – Change Docbase ID

Yann Neuhaus - Mon, 2019-01-28 02:04

This blog is the first one of a series that I will publish in the next few days/weeks regarding how to change a Docbase ID, a Docbase name, and so on, in Documentum CS.
So, let's dig in with the first one: the Docbase ID. I did it on Documentum CS 16.4 with an Oracle database, on a freshly installed docbase.

We will be working with the docbase repo1, changing the docbase ID from 101066 (18aca) to 101077 (18ad5).

1. Migration tool overview and preparation

The tool we will use here is MigrationUtil, and the relevant folder is:

[dmadmin@vmtestdctm01 ~]$ ls -rtl $DM_HOME/install/external_apps/MigrationUtil
total 108
-rwxr-xr-x 1 dmadmin dmadmin 99513 Oct 28 23:55 MigrationUtil.jar
-rwxr-xr-x 1 dmadmin dmadmin   156 Jan 19 11:09 MigrationUtil.sh
-rwxr-xr-x 1 dmadmin dmadmin  2033 Jan 19 11:15 config.xml

The default content of MigrationUtil.sh:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh
#!/bin/sh
CLASSPATH=${CLASSPATH}:MigrationUtil.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil

Update it if you need to overload the CLASSPATH during the migration only. That was my case: I had to add the Oracle driver path to the $CLASSPATH, because I received the error below:

...
ERROR...oracle.jdbc.driver.OracleDriver
ERROR...Database connection failed.
Skipping changes for docbase: repo1
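
For reference, the adjusted MigrationUtil.sh could look like the sketch below; the ojdbc8.jar location is only an example and depends on where the Oracle JDBC driver is installed in your environment:

#!/bin/sh
# Append the Oracle JDBC driver to the CLASSPATH used by the utility (jar path is an example)
CLASSPATH=${CLASSPATH}:MigrationUtil.jar:/app/dctm/product/16.4/oracle/jdbc/lib/ojdbc8.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil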

To make the blog more readable, I will not show you the full contents of config.xml; below is the updated part used to change the Docbase ID:

...
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">repo1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

<entry key="ChangeDocbaseID">yes</entry> <!-- To change docbase ID or not -->
<entry key="Docbase_name">repo1</entry> <!-- has to match with DocbaseName.1 -->
<entry key="NewDocbaseID">101077</entry> <!-- New docbase ID -->
...

Set all other entries to no.
The tool will use the above information and load more from the server.ini file.

Before you start the migration script, you have to adapt the maximum open cursors in the database. In my case, with a freshly installed docbase, I had to set the open_cursors value to 1000 (instead of 300):

alter system set open_cursors = 1000

Check with your DB Administrator before making any change.
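
If you want to apply and verify the change yourself, a minimal SQL*Plus session could look like the sketch below (run as a privileged database user; open_cursors is a dynamic parameter, so SCOPE=BOTH applies it immediately and persists it across restarts):

# Example only: check and change open_cursors on the database server (adjust the connection to your environment)
sqlplus / as sysdba <<'EOF'
SHOW PARAMETER open_cursors;
ALTER SYSTEM SET open_cursors = 1000 SCOPE=BOTH;
EOF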

Without this change, I got the error below:

...
Changing Docbase ID...
Database owner password is read from config.xml
java.sql.SQLException: ORA-01000: maximum open cursors exceeded
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
	at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
	at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1150)
	at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4798)
	at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:4875)
	at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1361)
	at SQLUtilHelper.setSQL(SQLUtilHelper.java:129)
	at SQLUtilHelper.processColumns(SQLUtilHelper.java:543)
	at SQLUtilHelper.processTables(SQLUtilHelper.java:478)
	at SQLUtilHelper.updateDocbaseId(SQLUtilHelper.java:333)
	at DocbaseIDUtil.(DocbaseIDUtil.java:61)
	at MigrationUtil.main(MigrationUtil.java:25)
...
2. Before the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repo1
Docbase id          : 101066
Docbase description : repo1 repository
...

Create a document in the docbase
Create an empty file

touch /home/dmadmin/DCTMChangeDocbaseExample.txt

Create a document in the repository using idql.
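
An idql session can be opened as shown below (the user and password are the ones used elsewhere in this post and may differ in your environment); the DQL statement that follows is then run from that session:

# Open an interactive idql session against the repository (example credentials)
idql repo1 -Udmadmin -Pinstall164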

create dm_document object
SET title = 'DCTM Change Docbase Document Example',
SET subject = 'DCTM Change Docbase Document Example',
set object_name = 'DCTMChangeDocbaseExample.txt',
SETFILE '/home/dmadmin/DCTMChangeDocbaseExample.txt' with CONTENT_FORMAT= 'msww';

Result:

object_created  
----------------
09018aca8000111b
(1 row affected)

note the r_object_id

3. Execute the migration

Before you execute the migration, you have to stop the docbase and the docbroker.

$DOCUMENTUM/dba/dm_shutdown_repo1
$DOCUMENTUM/dba/dm_stop_DocBroker

Now, you can execute the migration script:

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Created log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseIdChange.log
Changing Docbase ID...
Database owner password is read from config.xml
Finished changing Docbase ID...

Skipping Host Name Change...
Skipping Install Owner Change...
Skipping Server Name Change...
Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

No errors, sounds good ;) All changes have been recorded in the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseIdChange.log
Reading config.xml from path: config.xmlReading server.ini parameters

Retrieving server.ini path for docbase: repo1
Found path: /app/dctm/product/16.4/dba/config/repo1/server.ini
Set the following properties:

Docbase Name:repo1
Docbase ID:101066
New Docbase ID:101077
DBMS: oracle
DatabaseName: DCTMDB
SchemaOwner: repo1
ServerName: vmtestdctm01
PortNumber: 1521
DatabaseOwner: repo1
-------- Oracle JDBC Connection Testing ------
jdbc:oracle:thin:@vmtestdctm01:1521:DCTMDB
Connected to database
Utility is going to modify Objects with new docbase ID
Sun Jan 27 19:08:58 CET 2019
-----------------------------------------------------------
Processing tables containing r_object_id column
-----------------------------------------------------------
-------- Oracle JDBC Connection Testing ------
jdbc:oracle:thin:@vmtestdctm01:1521:DCTMDB
Connected to database
...
...
-----------------------------------------------------------
Update the object IDs of the Table: DMC_ACT_GROUP_INSTANCE_R with new docbase ID:18ad5
-----------------------------------------------------------
Processing objectID columns
-----------------------------------------------------------
Getting all ID columns from database
-----------------------------------------------------------

Processing ID columns in each documentum table

Column Name: R_OBJECT_ID
Update the ObjectId columns of the Table: with new docbase ID

Processing ID columns in each documentum table

Column Name: R_OBJECT_ID
Update the ObjectId columns of the Table: with new docbase ID
...
...
-----------------------------------------------------------
Update the object IDs of the Table: DM_XML_ZONE_S with new docbase ID:18ad5
-----------------------------------------------------------
Processing objectID columns
-----------------------------------------------------------
Getting all ID columns from database
-----------------------------------------------------------
Processing ID columns in each documentum table
Column Name: R_OBJECT_ID
Update the ObjectId columns of the Table: with new docbase ID
-----------------------------------------------------------
Updating r_docbase_id of dm_docbase_config_s and dm_docbaseid_map_s...
update dm_docbase_config_s set r_docbase_id = 101077 where r_docbase_id = 101066
update dm_docbaseid_map_s set r_docbase_id = 101077 where r_docbase_id = 101066
Finished updating database values...
-----------------------------------------------------------
-----------------------------------------------------------
Updating the new DocbaseID value in dmi_vstamp_s table
...
...
Updating Data folder...
select file_system_path from dm_location_s where r_object_id in (select r_object_id from dm_sysobject_s where r_object_type = 'dm_location' and object_name in (select root from dm_filestore_s))
Renamed '/app/dctm/product/16.4/data/repo1/replica_content_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/replica_content_storage_01/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/replicate_temp_store/00018aca' to '/app/dctm/product/16.4/data/repo1/replicate_temp_store/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/streaming_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/streaming_storage_01/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/content_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/content_storage_01/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/thumbnail_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/thumbnail_storage_01/00018ad5
select file_system_path from dm_location_s where r_object_id in (select r_object_id from dm_sysobject_s where r_object_type = 'dm_location' and object_name in (select log_location from dm_server_config_s))
Renamed '/app/dctm/product/16.4/dba/log/00018aca' to '/app/dctm/product/16.4/dba/log/00018ad5
select r_object_id from dm_ldap_config_s
Finished updating folders...
-----------------------------------------------------------
-----------------------------------------------------------
Updating the server.ini with new docbase ID
-----------------------------------------------------------
Retrieving server.ini path for docbase: repo1
Found path: /app/dctm/product/16.4/dba/config/repo1/server.ini
Backed up '/app/dctm/product/16.4/dba/config/repo1/server.ini' to '/app/dctm/product/16.4/dba/config/repo1/server.ini_docbaseid_backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/repo1/server.ini
Docbase ID Migration Utility completed!!!
Sun Jan 27 19:09:52 CET 2019

Start the Docbroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repo1
4. After the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repo1
Docbase id          : 101077
Docbase description : repo1 repository
...

Check the document created before the migration:
Adapt the r_object_id with the new docbase ID: 09018ad58000111b
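
As a quick sanity check: the docbase ID is embedded in hexadecimal in every r_object_id. The first two hex digits are the type tag, the next six the docbase ID, and the last eight the object number. Since 101077 in decimal is 18ad5 in hex, the new ID decodes as:

09 (type tag, dm_document) + 018ad5 (docbase ID, 0x18ad5 = 101077) + 8000111b (object number) = 09018ad58000111b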

API> dump,c,09018ad58000111b    
...
USER ATTRIBUTES
  object_name                     : DCTMChangeDocbaseExample.txt
  title                           : DCTM Change Docbase Document Example
  subject                         : DCTM Change Docbase Document Example
...
  r_object_id                     : 09018ad58000111b
...
  i_folder_id                  [0]: 0c018ad580000105
  i_contents_id                   : 06018ad58000050c
  i_cabinet_id                    : 0c018ad580000105
  i_antecedent_id                 : 0000000000000000
  i_chronicle_id                  : 09018ad58000111b
5. Conclusion

After a lot of tests on my VMs, I can say that changing the docbase ID is reliable on a freshly installed docbase. On the other hand, each time I tried it on a “used” docbase, I got errors like:

Changing Docbase ID...
Database owner password is read from config.xml
java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (GREPO5.D_1F00272480000139) violated

	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
	at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
	at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1150)
	at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4798)
	at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:4875)
	at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1361)
	at SQLUtilHelper.setSQL(SQLUtilHelper.java:129)
	at SQLUtilHelper.processColumns(SQLUtilHelper.java:543)
	at SQLUtilHelper.processTables(SQLUtilHelper.java:478)
	at SQLUtilHelper.updateDocbaseId(SQLUtilHelper.java:333)
	at DocbaseIDUtil.(DocbaseIDUtil.java:61)
	at MigrationUtil.main(MigrationUtil.java:25)

I didn’t investigate this error enough; it deserves more time, but it wasn’t my priority. In any case, the tool performed a correct rollback.

Now it is your turn to practice. Don’t hesitate to comment on this blog to share your own experience and opinion :)
In the next blog, I will try to change the docbase name.

This article Documentum – MigrationUtil – 1 – Change Docbase ID appeared first on Blog dbi services.

Pi in a time of Brexit – Remote Controlling Raspberry Pi from Ubuntu using VNC

The Anti-Kyte - Sun, 2019-01-27 14:52

What with Larry the Downing Street Cat and Palmerston, his counterpart at the Foreign Office, Teddy suspects he knows the real reason for the Country’s current travails.
Here he is, doing his best Boris Johnson impression :

“No wonder Brexit’s a Cat-astrophe !”

In an attempt to distract myself from the prospect of the country being ruined by this feline conspiracy, I’ve been playing with my new Raspberry Pi Model 3 B-spec.
At some point, I’m going to want to connect remotely to the Desktop on the Pi. What follows is how I can do this using VNC…

Why VNC ?

Regular readers (hello Mum!) may be wondering why I’m returning to this topic, having previously used RDP to remote to a Pi.

Well, this newer model Raspberry Pi is running Raspbian Stretch (version 9), as opposed to the older machine, which was running Jessie (version 8).
Stretch has VNC included by default so it makes sense to use this protocol for connecting to the desktop remotely.

Now, the more observant among you will notice that you can simply and easily enable VNC in the same way as you can enable SSH during initial setup.
You can see this option in the Preferences/Raspberry Pi Configuration menu when you click on the Interfaces tab :


If, like me, you don’t discover that this is the way to go until after you’ve put away all those old peripherals you had to dig out of the attic to set up your Pi, then fear not: you can also do this from the command line…

On the Pi

First of all, we want to make sure that we do, in fact, have the required VNC software on the Pi.
So, once I’ve connected to the Pi via SSH, I can run this from the command line :

apt list realvnc*

…which should come back with :
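
The exact versions will depend on your image, but on a stock Raspbian Stretch install the listing should look roughly like this (version strings here are illustrative):

Listing... Done
realvnc-vnc-server/stable,now 6.x armhf [installed]
realvnc-vnc-viewer/stable,now 6.x armhf [installed]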

Now we want to configure VNC on the Pi so, on the command line, we need to enter …

sudo raspi-config

This will bring up the Software Configuration Tool screen below.
Using the arrow keys on the keyboard, navigate to the line that starts 5 Interface Options and hit the [Enter] key.


…which brings up a sub-menu. Here, you need to navigate to P3 VNC and hit [Enter]


…and [Enter] again to confirm you want to enable VNC…

…before you receive a message confirming that VNC is now enabled :

To exit, hit [Enter]

I’m not sure if it’s strictly necessary, but at this point I restarted the Pi by entering :

sudo reboot
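
As an aside, if you are scripting the setup, recent versions of raspi-config also offer a non-interactive mode. Something like the line below should enable VNC without going through the menus (I’m assuming here that your raspi-config supports nonint and that 0 means enable, so check your version first):

sudo raspi-config nonint do_vnc 0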
In Ubuntu

Meanwhile, on the Ubuntu machine (I’m running Ubuntu 16.04), we need to head over to the VNC Viewer download site.
As I’m on a 64-bit version of Ubuntu, I chose the DEB x64 version to download.

Incidentally, if you want to tell whether you’re running a 32-bit or 64-bit Linux distro, you can run :

uname -i

If this returns x86_64 then you’re on a 64-bit platform.

Anyhow, when prompted, I opted to open the downloaded file – VNC-Viewer-6.19.107-Linux-x64.deb with Software Install


…which results in…


Now we simply click Install and enter our password when prompted.

Once the installation is completed we’re ready to connect remotely.
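
If you prefer the terminal to Software Install, the same .deb can be installed with dpkg; a sketch, assuming the file is still in ~/Downloads (the final apt-get -f install pulls in any missing dependencies):

cd ~/Downloads
sudo dpkg -i VNC-Viewer-6.19.107-Linux-x64.deb
sudo apt-get -f install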

Running VNC Viewer

To start the viewer, you can simply open a Terminal and run :

vncviewer

After you’ve accepted the licence, enter the address of the server to connect to (in my case pithree) :

You’ll then be prompted to enter the username and password of a user on the Pi :


Press OK and…


You can tweak the display to make it a bit more practical.
In the VNC Window, move the cursor to the top of the screen so that the Viewer menu slides down then select the cog-wheel icon (second from the right) :

In the Options tab, set Picture Quality to High and Scaling to Scale to fit window :

After this, the VNC viewport should scale to the size of the VNC window itself.

Now all I need is something else to distract myself from the ongoing battle between Project Fear and Project Farce.

PostgreSQL: When wal_level to logical

Yann Neuhaus - Sun, 2019-01-27 04:50

wal_level determines the quantity of information written to the WAL. With PostgreSQL 11 the parameter wal_level can have 3 values:
- minimal: only information needed to recover from a crash or an immediate shutdown
- replica: enough data to support WAL archiving and replication
- logical: enough information to support logical decoding.

If we want to use logical decoding, wal_level should be set to logical. Logical decoding is the process of extracting all persistent changes to a database’s tables into a coherent, easy to understand format which can be interpreted without detailed knowledge of the database’s internal state.
In PostgreSQL, logical decoding is implemented by decoding the contents of the write-ahead log, which describe changes on a storage level, into an application-specific form such as a stream of tuples or SQL statements.

In this blog we are going to see some easy examples which will allow us to better understand this concept.

Before we can use logical decoding, the parameter wal_level should be set to logical. As we will create replication slots, the parameter max_replication_slots should also be at least 1.
Below are our values for these parameters:

postgres=# show max_replication_slots ;
 max_replication_slots
-----------------------
 10
(1 row)

postgres=# show wal_level ;
 wal_level
-----------
 logical
(1 row)

postgres=#
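
If your instance does not have these values yet, a minimal way to set them (both parameters only take effect after a restart of the instance) would be something like:

postgres=# alter system set wal_level = logical;
ALTER SYSTEM
postgres=# alter system set max_replication_slots = 10;
ALTER SYSTEM

followed by a restart, for example with pg_ctl restart -D $PGDATA.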

First let’s create a slot. For this we will use the function pg_create_logical_replication_slot()

postgres=# SELECT * FROM pg_create_logical_replication_slot('my_slot', 'test_decoding');
 slot_name |    lsn
-----------+-----------
 my_slot   | 0/702B658
(1 row)

postgres=#

To inspect the changes at the WAL level we can use the function pg_logical_slot_get_changes(). So let’s call this function:

postgres=# SELECT * FROM pg_logical_slot_get_changes('my_slot', NULL, NULL);
 lsn | xid | data
-----+-----+------
(0 rows)

postgres=#

The above output is expected because there are no changes yet in our database.
Now let’s do some inserts in the database and then call the function pg_logical_slot_get_changes() again:

postgres=# begin;
BEGIN
postgres=# insert into mytab values (1,'t1');
INSERT 0 1
postgres=# insert into mytab values (2,'t2');
INSERT 0 1
postgres=# commit;
COMMIT

postgres=# SELECT * FROM pg_logical_slot_get_changes('my_slot', NULL, NULL);
    lsn    | xid |                                  data
-----------+-----+------------------------------------------------------------------------
 0/703F538 | 582 | BEGIN 582
 0/703F538 | 582 | table public.mytab: INSERT: id[integer]:1 name[character varying]:'t1'
 0/703F5B0 | 582 | table public.mytab: INSERT: id[integer]:2 name[character varying]:'t2'
 0/703F620 | 582 | COMMIT 582
(4 rows)

postgres=#

As expected, we can see the changes that were made.
Now what happens if we call the same function again?

postgres=# SELECT * FROM pg_logical_slot_get_changes('my_slot', NULL, NULL);
 lsn | xid | data
-----+-----+------
(0 rows)

postgres=#

The changes are no longer reported. This is normal: with the function pg_logical_slot_get_changes(), changes are consumed, i.e. they will not be returned again. If we want the changes not to be consumed, we can use the function pg_logical_slot_peek_changes(). This function behaves like the first one, except that changes are not consumed; that is, they will be returned again on future calls.

postgres=# begin;
BEGIN
postgres=# insert into mytab values (3,'t3');
INSERT 0 1
postgres=# insert into mytab values (4,'t4');
INSERT 0 1
postgres=# commit;
COMMIT                   
postgres=# delete from mytab where id=1;
DELETE 1

postgres=# SELECT * FROM pg_logical_slot_peek_changes('my_slot', NULL, NULL);
    lsn    | xid |                                  data
-----------+-----+------------------------------------------------------------------------
 0/703F738 | 583 | BEGIN 583
 0/703F738 | 583 | table public.mytab: INSERT: id[integer]:3 name[character varying]:'t3'
 0/703F838 | 583 | table public.mytab: INSERT: id[integer]:4 name[character varying]:'t4'
 0/703F8A8 | 583 | COMMIT 583
 0/703F8E0 | 584 | BEGIN 584
 0/703F8E0 | 584 | table public.mytab: DELETE: (no-tuple-data)
 0/703F948 | 584 | COMMIT 584
(7 rows)

postgres=# SELECT * FROM pg_logical_slot_peek_changes('my_slot', NULL, NULL);
    lsn    | xid |                                  data
-----------+-----+------------------------------------------------------------------------
 0/703F738 | 583 | BEGIN 583
 0/703F738 | 583 | table public.mytab: INSERT: id[integer]:3 name[character varying]:'t3'
 0/703F838 | 583 | table public.mytab: INSERT: id[integer]:4 name[character varying]:'t4'
 0/703F8A8 | 583 | COMMIT 583
 0/703F8E0 | 584 | BEGIN 584
 0/703F8E0 | 584 | table public.mytab: DELETE: (no-tuple-data)
 0/703F948 | 584 | COMMIT 584
(7 rows)

postgres=#
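
A side note on the DELETE line above: (no-tuple-data) typically means the table has no primary key (or other replica identity), so the old row values are not written to the WAL and cannot be decoded. If you need them in the decoded output, and can accept the extra WAL volume, something like the following should help:

postgres=# alter table mytab replica identity full;
ALTER TABLE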

Logical decoding can also be managed using pg_recvlogical, which is included in the PostgreSQL distribution.
Let’s create a slot using pg_recvlogical:

[postgres@dbi-pg-essentials_3 PG1]$ pg_recvlogical -d postgres --slot=myslot_2  --create-slot

And let’s start the streaming in a first terminal

[postgres@dbi-pg-essentials_3 PG1]$ pg_recvlogical -d postgres --slot=myslot_2  --start -f -

If we do an insert in the database from a second terminal

postgres=# insert into mytab values (9,'t9');
INSERT 0 1
postgres=#

We will see the following in the first terminal:

[postgres@dbi-pg-essentials_3 PG1]$ pg_recvlogical -d postgres --slot=myslot_2  --start -f -   
BEGIN 587
table public.mytab: INSERT: id[integer]:9 name[character varying]:'t9'
COMMIT 587
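
When the tests are finished, the slots should be dropped, because a replication slot retains WAL until its changes have been consumed. For example:

postgres=# select pg_drop_replication_slot('my_slot');

[postgres@dbi-pg-essentials_3 PG1]$ pg_recvlogical -d postgres --slot=myslot_2 --drop-slot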
Conclusion

In this blog we have seen that if we want to do logical decoding, we have to set the parameter wal_level to logical. Be aware that setting wal_level to logical can increase the volume of generated WAL. If we just want replication or WAL archiving, the value replica is enough in PostgreSQL.

This article PostgreSQL: When wal_level to logical appeared first on Blog dbi services.

Microsoft Azure: First steps (create an account and logging in)

Dietrich Schroff - Sun, 2019-01-27 02:36
After doing a lot of things with Amazon Web Services (AWS), I decided to take a look at Microsoft Azure.

Starting point is azure.microsoft.com


I just clicked on "Start free" and was asked to log in with an already existing Microsoft account (live.com, etc.). In the registration process you have to provide a phone number for verification (by call or by message) and then your credit card information.
No worries, Microsoft offers a start with a $170 credit:


After the registration you can attend an online course, but I decided to move on to the portal:
And here we go:
Looks like the settings manager from XFCE (a Linux desktop manager) ;-)

I clicked on virtual machines and, of course, no machines are listed yet, just a big blue button for adding VMs.
But this will be the topic of another posting...
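
As a small aside for command-line fans: the same list of virtual machines can also be retrieved with the Azure CLI, assuming it is installed and logged in to the right subscription:

az login
az vm list --output table

For a freshly created account this will, of course, come back empty as well.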

Search Form in Oracle Visual Builder based on ADF BC REST

Andrejus Baranovski - Sat, 2019-01-26 05:14
Oracle Visual Builder supports ADF BC REST out of the box. Build a service connection using the "Define by Specification" wizard:


The wizard supports ADF as an API type. Add describe at the end of the REST URL; this will bring in the metadata for the exposed ADF BC REST service (information about attribute types, etc.):


The list of endpoints will be populated automatically. You can select all endpoints to be supported for your connection or only a few:


The most typical thing you would do with an endpoint is map it to a table to display collection data. You would drag and drop an Oracle JET table onto a VBCS page and choose the Add Data option to map it to the service connection:


In the wizard you would select the previously defined service connection:


There is a way to switch the wizard to a detailed view and choose from the multiple endpoints available for the connection:


In the next step, you would select the service attributes to be displayed in the table columns. All declarative, sweet:


In Visual Builder you can quickly test the application at any point; it will load in a separate browser tab (or you can switch the app to Live mode and test the page functionality directly in the VBCS window):


Every action in Visual Builder is handled through events. For example, this event is mapped to the Reset button (you can see it in the Structure tab on the left):


At any point you can switch to the source view and check (or edit) the HTML/JET code that Visual Builder generates for you. So cool: imagine typing and copy-pasting all this text by hand, tedious and time-consuming (you could do better things in your life than copy-pasting HTML code):


Let's explain how the search form logic is done in this sample. I have defined a page-scope variable type; this type holds the search attribute name, type and operation:


Create as many variables based on this type as search criteria items you expect to have. Make sure to provide the attribute and operation names (leave the value property empty; it will be assigned by the user):


Map search form fields with variables:


Create an event for the Search button which calls the search action chain:


In the action chain we can define the search logic. Before executing the search, we need to prepare the search criteria array (normally this step could be skipped, but there is an issue in the current Visual Builder release: it fails to execute a criteria search when at least one of the criteria items is empty). We call a custom JavaScript function in which the search criteria array is prepared:


The custom JavaScript function helps to prepare the array to be passed to the criteria (if a search item is not set, we assign an empty value):
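
A minimal sketch of such a helper, assuming the criteria items are objects with attribute, op and value properties (the function name and the item shape here are illustrative, not the exact code from the sample):

  PageModule.prototype.buildCriteria = function (criteriaItems) {
    // criteriaItems: array of { attribute, op, value } objects built from the page variables
    return criteriaItems.map(function (item) {
      return {
        attribute: item.attribute,
        op: item.op,
        // replace an unset value with an empty string so the criteria search still executes
        value: item.value ? item.value : ''
      };
    });
  };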


The result of the function is mapped to the service connection criteria, and the search is executed automatically:


Table pagination is handled automatically too. Make sure to specify scroll policy = loadMoreOnScroll and define the fetch size:


Resources:

1. Sample source code on my GitHub
2. Blog from Shay - Filtering Data Providers with Compound Conditions in Visual Builder
3. Blog from Shay - Oracle JET UI on Top of Oracle ADF With Visual Builder
4. My previous post about query logic in Visual Builder - Query Logic Implementation in VBCS for ADF BC REST
