Feed aggregator

Oracle Launches Internet Intelligence Map Providing a Unique View into the Global Internet

Oracle Press Releases - Wed, 2018-06-13 11:25
Press Release
Oracle Launches Internet Intelligence Map Providing a Unique View into the Global Internet
Free dashboard delivers insight into the impact of internet disruptions

VELOCITY, San Jose, Calif.—Jun 13, 2018

Oracle (NYSE:ORCL) today announced availability of the Internet Intelligence Map, providing users with a simple, graphical way to track the health of the internet and gain insight into the impact of events such as natural disasters or state-imposed interruptions. The map is part of Oracle’s Internet Intelligence initiative, which provides insight and analysis on the state of global internet infrastructure. To access the free Internet Intelligence Map, visit internetintel.oracle.com.

For more than a decade, members of Oracle’s Internet Intelligence team have broken some of the biggest stories about the internet. From BGP hijacks to submarine cable breaks, Oracle’s Internet Intelligence team frequently publishes objective data and analysis that informs public understanding of the technical underpinnings of the internet and its effects on topics like geopolitics and e-commerce. With today’s news, Oracle is now making core analytic capabilities available to everyone via the Internet Intelligence Map. Using one of the world’s most comprehensive internet performance data sets and backed by years of research and analytics, Oracle has developed the premier resource and authority for reliable information on the functioning of the internet.

“The internet is the world’s most important network, yet it is incredibly volatile. Disruptions on the internet can affect companies, governments, and network operators in profound ways,” said Kyle York, vice president of product strategy for Oracle Cloud Infrastructure and the general manager for Oracle’s Dyn Global Business Unit. “As a result, all of these stakeholders need better visibility into the health of the global internet. With this offering, we are delivering on our commitment to making it a better, more stable experience for all who rely on it.”

The Internet Intelligence Map presents country-level connectivity statistics based on traceroutes, BGP, and DNS query volumes on a single dashboard. By presenting these three dimensions of internet connectivity side-by-side, users can investigate the impact of an issue on internet connectivity worldwide.

“It’s important to have a global view of the internet in order to understand how external events prevent users from reaching your web-based applications and services. It is only when you have this insight that you can work around those issues to improve availability and performance,” said Jim Davis, Founder and Principal Analyst of Edge Research Group.

The Internet Intelligence Map is just one of many advanced awareness and visibility tools that help Oracle improve the experience of the cloud by making it better and more reliable every day. This offering is powered by Oracle Cloud Infrastructure, which offers a set of core infrastructure services to provide customers the ability to run any workload in the cloud. Only Oracle Cloud Infrastructure provides the compute, storage, networking, and edge services necessary to deliver the end-to-end performance required of today's modern enterprise.

Contact Info
Danielle Tarp
Oracle
+1.408.921.7063
danielle.tarp@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle Cloud Infrastructure

Oracle Cloud Infrastructure combines the benefits of public cloud (on-demand, self-service, scalability, pay-for-use) with those benefits associated with on-premises environments (governance, predictability, control) into a single offering. Oracle Cloud Infrastructure takes advantage of a high-scale, high-bandwidth network that connects cloud servers to high-performance local, block, and object storage to deliver a cloud platform that yields the highest performance for traditional and distributed applications, as well as highly available databases. With the acquisitions of Dyn and Zenedge, Oracle Cloud Infrastructure extended its offering to include Dyn’s best-in-class DNS and email delivery solutions and Zenedge’s next-generation Web Application Firewall (WAF) and Distributed Denial of Service (DDoS) capabilities.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


ORA-01704: string literal too long

Flavio Casetta - Wed, 2018-06-13 08:31
Categories: DBA Blogs

New Oracle Autonomous Cloud Services Ease Mobile Development, Data Integration

Oracle Press Releases - Wed, 2018-06-13 07:00
Press Release
New Oracle Autonomous Cloud Services Ease Mobile Development, Data Integration
AI-based PaaS services cut costs and speed development of chatbots, data integration, and API management

Redwood Shores, Calif.—Jun 13, 2018

Oracle (NYSE: ORCL) today announced the availability of its next-generation Oracle Cloud Platform services featuring built-in autonomous capabilities, including Oracle Mobile Cloud Enterprise, Oracle Data Integration Platform Cloud, and Oracle API Platform Cloud. With embedded artificial intelligence (AI) and machine learning, these platform services automate operational tasks to enable organizations to lower cost, reduce risk, accelerate innovation, and get predictive insights.

To continuously innovate while keeping pace with dynamic business environments, enterprises need secure, comprehensive, integrated cloud services to build new applications and run their most demanding workloads. Only Oracle’s cloud services can intelligently automate key operational functions, including tuning, patching, backups, and upgrades giving organizations more time to focus on strategic business activities, while delivering maximum performance, high availability, and critical security features.

“There is tremendous value for our customers in embedding AI and machine learning capabilities throughout our entire cloud portfolio,” said Amit Zavery, executive vice president of development, Oracle Cloud Platform. “Customers have embraced our vision for an autonomous enterprise. With Oracle’s autonomous platform services, organizations can capitalize on AI to reduce costs, speed innovation, and transform how they do business.”

Last year, Oracle outlined its vision for an autonomous enterprise, unveiling the world’s first Autonomous Database. Since then, the company announced it would extend autonomous capabilities to its entire cloud platform portfolio. Delivering on that promise, Oracle recently made available a number of autonomous platform services, including Oracle Autonomous Data Warehouse Cloud, Oracle Analytics Cloud, Oracle Integration Cloud, and Oracle Visual Builder. With today’s availability of another set of autonomous platform services, Oracle continues to build momentum. Later in 2018, Oracle plans to release more autonomous capabilities focused on Blockchain, security and management, and additional database workloads, including OLTP and NoSQL.

Mutua Madrid Open Develops MatchBot with Oracle Cloud Platform

Mutua Madrid Open became the first ATP World Tour Masters 1000 and Premier WTA tournament to incorporate an AI-equipped chatbot to improve communication with tennis fans. Implemented with Oracle Cloud Platform, the chatbot, named “MatchBot,” used AI to maintain natural conversations that provided fans with information on the event, players, and results, as well as details on hospitality services, discounts on merchandise, ticket sales, access, and parking.

“We wanted to position the Mutua Madrid Open as the tournament of the 21st century,” said Gerard Tsobanian, president and CEO of Mutua Madrid Open. “Development of the MatchBot positions us at the forefront of technology and innovation. With this new technology, we were able to provide visitors with an amazing experience—a pleasant, simpler, and faster way to get the information they wanted about the tournament.”

New Oracle Cloud Platform Services Featuring Built-in AI

By adding autonomous capabilities to its latest set of Platform Cloud services, Oracle helps organizations easily develop new innovative user experiences with chatbot and voice capabilities, enables business users to perform intelligent data integration tasks, and exposes business logic and data via robust API design/management.

Oracle Mobile Cloud Enterprise

Oracle Mobile Cloud Enterprise provides a complete, open, and proven enterprise platform to develop, deliver, analyze, and manage mobile apps and AI-powered chatbots that connect and extend any backend system in a secure, scalable manner. Learn more here.

  • Self-learning chatbots observe interactions and preferences to automate frequently performed actions.
  • Automated learning from conversations ensures higher accuracy in the machine learning behind the smart bots, including seamless handoffs to human agents via trained AI algorithms.
  • Automatic generation of Q&A chatbots extracts knowledge from unstructured data by leveraging machine learning.

Oracle Data Integration Platform Cloud

Oracle Data Integration Platform Cloud helps organizations make better and faster decisions by providing access to all data, and unlocking value from data faster through a combination of machine learning and artificial intelligence powered features that stream, migrate, transform, enrich, and govern data from anywhere. Learn more here.

  • Simplifies and automates creation of big data lakes and data warehouses through one click self-defining tasks to deliver streaming or batch data, thereby improving standardization and efficiency for big data projects.
  • Delivers self-optimizing data pipelines for rapid data delivery into Oracle Autonomous Data Warehouse Cloud.
  • Enables trusted, governed cloud-based analytics through system-guided data sharing between SaaS, on-premises, and hybrid business applications.
  • Speeds up self-service data preparation through machine-assisted data curation and eliminates manual work when creating data pipelines.

Oracle API Platform Cloud

Oracle API Platform Cloud supports agile API-first design and development, enabling hybrid deployment of the API gateway across Oracle Cloud, on-premises, and third-party clouds. It provides insight into KPIs covering every aspect of the API lifecycle, while employing the most up-to-date security protocols. Learn more here.

  • Continuously learns about usage patterns and recommends plan allocation limits and configurations.
  • Recommends policies and policy configurations to API managers, using predictive algorithms based on the usage and configuration of other policies.

Oracle Developer Cloud

Oracle Developer Cloud is a complete development platform included with all Oracle Cloud services that automates software development and delivery cycles and helps teams manage agile development processes. With an easy-to-use web interface and integration with popular development tools, it helps deliver better applications faster. Learn more here.

  • Build automation for multiple development languages and environments, supporting popular build frameworks such as Maven, Ant, Gradle, npm, Grunt, Gulp, and Bower.
  • Test automation with popular testing frameworks such as JUnit and Selenium enables automation of both logic and UI testing.
  • Environment provisioning automation via command line interfaces for Docker, Kubernetes, and Terraform.
  • Continuous integration automation through visual pipeline flows. Monitor live execution of pipelines and build job execution history from a central location.
Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Kristin Reeves
Blanc and Otus
+1.925.787.6744
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Active clone recover needed arc files

Tom Kyte - Wed, 2018-06-13 04:30
Oracle 12.2 I am running the following command: *.db_file_name_convert='/db1/ifddb1/dbf/','/db1/ifdtest1/dbf/' *.log_file_name_convert='/redo/ifddb1/redologs/','/redo/ifdtest1/redologs/' sqlplus ' / as sysdba ' <<EOT shutdown abort startu...
Categories: DBA Blogs

Use DBMS_OUTPUT or HTP depending call origin

Tom Kyte - Wed, 2018-06-13 04:30
Hey, I have this procedure: <code>procedure output(i_msg in varchar2) is begin dbms_output.put_line(i_msg); -- if call is from apex i want to use htp.p end;</code> Is it possible to switch the "output chanel" to htp when the caller is...
Categories: DBA Blogs

Creating dll for executing external procedure('c' language)

Tom Kyte - Wed, 2018-06-13 04:30
Hi Tom, I am using Wint NT,Oracle 8i(server) and C language. My goal is calling 'c' routine thru stored procedure. For that I had made neccesary steps.I had modified Tnsnames and listener entry as follows. Tnsnames entry...
Categories: DBA Blogs

Avoid duplicates

Tom Kyte - Wed, 2018-06-13 04:30
Hi Tom, Thanks for your time. We have the scenario like this... DML on the table t should not populate the table with duplicate entries. So if we have table with data in it as : create table t ( x number); insert into t values (1);...
Categories: DBA Blogs

Introducing SQL managed instances on Azure

Yann Neuhaus - Wed, 2018-06-13 00:43

I have never written about data platform solutions on Azure so far. The fact is that in Switzerland we are definitely late in adopting the cloud and Azure data platform solutions. There are different reasons that are more or less valid, but I don't want to contribute to any debate here. In any case, the recent announcements in this field, with Azure data centers coming to Switzerland, could encourage customers to reconsider this topic in the near future. Don't get me wrong: it doesn't mean that customers in Switzerland must move all their database infrastructures to the cloud, but it is an opportunity for them to consider rearchitecting some pieces of their information system, including databases, which may lead to hybrid scenarios. This will likely be the first step towards cloud adoption for data platform solutions. At dbi services we never excluded Azure from our scope; we just silently kept an eye on the data platform stack, awaiting the right opportunity to move up a gear.

blog 138 - 0 - banner

Why begin with SQL managed instances (MI)? After all, this feature is still in preview, and there are already plenty of Azure solutions, such as Azure SQL Database (as a singleton or with elastic pools) as well as SQL Server on Azure VMs.

The point is that this new feature is interesting in many respects. Firstly, it addresses the gap that currently exists between IaaS infrastructures (SQL Server VMs) and fully managed services (Azure SQL Database). The former still requires maintaining the operating system (and licenses), while the latter doesn't expose all the feature surface needed by various application scenarios.

At the same time, Microsoft introduced another purchasing model that is based on vCores. I remember a discussion with one of my customers some time ago about DTUs. He asked me what exactly a DTU is, and I pointed out the following sentence from the Microsoft documentation:

The amount of resources is calculated as a number of Database Transaction Units or DTUs and is a bundled measure of compute, storage, and IO resources

That is definitely a good way to simplify resource management because it abstracts away the physical resources, but this is probably also its weakness to some degree. Indeed, how do you translate what DBAs and infrastructure teams usually manage on-premises to the cloud? Obviously, Microsoft provides a calculator to help customers answer these questions before moving to the cloud, but the fact is that database administrators do not seem comfortable dealing with DTUs. Now let's talk about flexibility: in many database scenarios we don't want to increase or decrease resources as one bundle; we want better control of the resource configuration by dissociating compute (CPU / RAM) from storage. From my experience, I have never seen a customer scale compute and storage in the same manner for their workload. Indeed, some workloads require high-performance storage while others are more CPU-bound. This is where the new vCore-based model comes into play, and I believe it will see better adoption from customers moving smoothly to the cloud. That's at least my opinion!

So, let's try to play with MI. As a reminder, it is currently in preview, but that's enough to get a picture of what you may expect in the future. In my demo, I will make intensive use of CLI tools, with dedicated PowerShell cmdlets as well as mssql-cli. This is deliberate, because more and more administration tasks are done this way and Microsoft provides all the commands to achieve them.

[dab@DBI-LT-DAB:#]> Get-AzureRmResourceGroup -Name sql-mi-rg


ResourceGroupName : sql-mi-rg
Location          : westeurope
ProvisioningState : Succeeded
Tags              :
ResourceId        : /subscriptions/913528f5-f1f8-4d61-af86-30f2eb0839ba/resourceGroups/sql-mi-rg

[dab@DBI-LT-DAB:#]> Get-AzureRmResource -ResourceGroupName sql-mi-rg | ft Name, ResourceType, Location -AutoSize

Name                                                    ResourceType                             Location
----                                                    ------------                             --------
sql-mi-client_OsDisk_1_842d669310b04cbd8352962c4bda5889 Microsoft.Compute/disks                  westeurope
sql-mi-client                                           Microsoft.Compute/virtualMachines        westeurope
shutdown-computevm-sql-mi-client                        Microsoft.DevTestLab/schedules           westeurope
sql-mi-client453                                        Microsoft.Network/networkInterfaces      westeurope
sql-mi-client-nsg                                       Microsoft.Network/networkSecurityGroups  westeurope
sqlmiclientnsg675                                       Microsoft.Network/networkSecurityGroups  westeurope
sql-mi-client-ip                                        Microsoft.Network/publicIPAddresses      westeurope
sqlmiclientip853                                        Microsoft.Network/publicIPAddresses      westeurope
sql-mi-routetable                                       Microsoft.Network/routeTables            westeurope
sql-mi-vnet                                             Microsoft.Network/virtualNetworks        westeurope
sql-mi-dbi                                              Microsoft.Sql/managedInstances           westeurope
sql-mi-dbi/ApplixEnterprise                             Microsoft.Sql/managedInstances/databases westeurope
sql-mi-dbi/dbi_tools                                    Microsoft.Sql/managedInstances/databases westeurope
VirtualClustersql-mi-subnet                             Microsoft.Sql/virtualClusters            westeurope
sqlmirgdiag947                                          Microsoft.Storage/storageAccounts        westeurope

My MI is composed of different resources, including:

  • VirtualClustersql-mi-subnet – a logical container of managed instances?
  • sql-mi-dbi as managed instance
  • sql-mi-dbi/ApplixEnterprise and sql-mi-dbi/dbi_tools as managed databases.
  • Network components including sql-mi-vnet, sql-mi-routetable

Here are some more details of my MI:

[dab@DBI-LT-DAB:#]> Get-AzureRmSqlManagedInstance | ft ManagedInstanceName, Location, ResourceGroupName, LicenseType, VCores, StorageSizeInGB -AutoSize

ManagedInstanceName Location   ResourceGroupName LicenseType     VCores StorageSizeInGB
------------------- --------   ----------------- -----------     ------ ---------------
sql-mi-dbi          westeurope sql-mi-rg         LicenseIncluded      8              32

 

I picked a Gen4 configuration based on General Purpose pricing that includes 8 vCores and 32GB of storage.

My managed databases are as follows:

[dab@DBI-LT-DAB:#]> Get-AzureRmSqlManagedDatabase -ManagedInstanceName sql-mi-dbi -ResourceGroupName sql-mi-rg | ft Name, ManagedInstanceName, Location, DefaultSecondaryLocation, Status, Collation -AutoSize

Name             ManagedInstanceName Location   DefaultSecondaryLocation Status Collation
----             ------------------- --------   ------------------------ ------ ---------
dbi_tools        sql-mi-dbi          westeurope northeurope              Online Latin1_General_CS_AS_KS
ApplixEnterprise sql-mi-dbi          westeurope northeurope              Online SQL_Latin1_General_CP1_CI_AS

 

The other resources are related to the client virtual machine I use to connect to my MI. Indeed, the managed instance is not exposed through a public endpoint and is reachable only from an internal network, and I didn't set up a site-to-site VPN to connect to the MI from my remote laptop.

Another point that drew my attention is the high availability feature which is based on remote storage and Azure Service Fabric.

blog 138 - 5 - azure - sql managed instances - HA

Do you remember the VirtualClustersql-mi-subnet described earlier? In fact, my MI is built upon a Service Fabric. According to the Microsoft documentation, Service Fabric enables you to build and manage scalable and reliable applications composed of microservices that run at high density on a shared pool of machines, which is referred to as a cluster.

We may get a picture of this underlying cluster from a set of dedicated sys.dm_hadr_fabric_* DMVs, starting with a high-level view of the cluster …

blog 138 - 6 - azure - sql managed instances - DMVs

… and a more detailed view including my managed databases:

blog 138 - 7 - azure - sql managed instances - DMVs detail
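The two screenshots above come from that DMV family. A quick way to see which of these views the preview actually exposes is a simple catalog lookup, which doesn't assume any specific DMV name (a minimal sketch):

-- List the Service Fabric related DMVs exposed by the managed instance
SELECT name, type_desc
FROM sys.all_objects
WHERE name LIKE 'dm_hadr_fabric%'
ORDER BY name;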

Now let’s get basic information from my MI:

blog 138 - 1 - azure - sql managed instances - engine version

The MI may be easily identified by the engine edition number, which is equal to 8.
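For reference, the engine edition can be checked directly with SERVERPROPERTY; on a managed instance it returns 8. A minimal sketch of the check shown in the screenshot above:

-- EngineEdition = 8 identifies an Azure SQL Managed Instance
SELECT SERVERPROPERTY('EngineEdition') AS engine_edition,
       SERVERPROPERTY('Edition')       AS edition,
       @@VERSION                       AS version_banner;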

As mentioned previously, I have two user databases, which were restored from backups hosted in my blob storage container.

Here is an example of the commands I used to restore them. You will probably recognize the native RESTORE FROM URL syntax. Note also that there are other ways to restore or migrate your databases from an on-premises environment, such as BACPAC files and the Azure Database Migration Service.

RESTORE FILELISTONLY 
FROM URL = 'https://mikedavemstorage.blob.core.windows.net/backup/ApplixEnterprise2014.bak'

RESTORE DATABASE [ApplixEnterprise] 
FROM URL = 'https://mikedavemstorage.blob.core.windows.net/backup/ApplixEnterprise2014.bak';
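For these RESTORE ... FROM URL statements to authenticate against the blob container, a server-scoped credential named after the container URL and backed by a shared access signature is normally created beforehand. A minimal sketch with a placeholder secret (the actual SAS token is not shown here):

-- Credential named after the container; the SECRET value is a placeholder SAS token
CREATE CREDENTIAL [https://mikedavemstorage.blob.core.windows.net/backup]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token-without-leading-question-mark>';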

 

Here is a list of my existing user databases:

blog 138 - 2 - azure - sql managed instances - databases
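The screenshot above can be reproduced with a query along these lines (a sketch, not the exact statement used):

-- User databases visible on the managed instance
SELECT name, state_desc, collation_name, create_date
FROM sys.databases
WHERE database_id > 4;  -- skip master, tempdb, model, msdb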

Let’s go further with database files configuration:

SELECT 
	DB_NAME(database_id) AS [db_name],
	file_id,
	type_desc,
	name AS [logical_name],
	physical_name,
	state_desc AS [state],
	size / 128 AS size_MB,
	max_size
FROM
	sys.master_files;
GO

 

blog 138 - 3 - azure - sql managed instances - database files

 

Some interesting points here:

1 – tempdb is pre-configured with 12 data files of 16MB each? That is probably far from our usual recommendation, but the preview allows changing it with the usual DBA scripts (a quick sketch follows these two points).

2 – We may also notice that the user databases are placed on a different storage type (premium disk, according to the Microsoft documentation). The system databases, as well as tempdb, are hosted on a local path C:\WFRoot\DB 3\Fabric\work\data\. I use a standard tier, meaning that the system databases all sit on an attached SSD included in the vCore price.
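As a quick illustration of the first point, the tempdb files can be inspected and resized with the usual statements; a minimal sketch (the logical name tempdev is an assumption here, take the real name from the first query):

-- Check the current tempdb file layout
SELECT name, physical_name, size / 128 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

-- Resize one data file (logical file name taken from the query above)
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 512MB);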

Just for fun, I tried to install our DMK maintenance tool, which basically creates a dbi_tools database with maintenance objects (tables and stored procedures) and the related SQL Server Agent jobs. A Database Mail configuration step is also part of the DMK installation, and the good news is that the feature is available with MIs. However, I quickly ran into issues with some ALTER DATABASE commands we use at the beginning of the deployment script:

Msg 5008, Level 16, State 14, Line 72
This ALTER DATABASE statement is not supported. Correct the syntax and execute the statement again.
Msg 5069, Level 16, State 1, Line 72
ALTER DATABASE statement failed.
Msg 5008, Level 16, State 14, Line 89
This ALTER DATABASE statement is not supported. Correct the syntax and execute the statement again.
Msg 5069, Level 16, State 1, Line 89
ALTER DATABASE statement failed.

 

The fix was quite easy and I finally managed to deploy the tool, as shown below:

blog 138 - 4 - azure - sql managed instances - dbi tools

The next step consisted of testing our different maintenance tasks:

  • Database integrity check task
  • Index maintenance task
  • Update statistics maintenance task
  • Backup task

The first three tasks worked well without any modification. However, for backups I needed to use URL-based backups because it is the only method supported so far. Unfortunately, the current version of our DMK maintenance tool doesn't correctly handle the shared access signatures that come with the BACKUP TO URL command since SQL Server 2016. The fix will surely be included in the next release :). For the purposes of my test I slightly modified the statement generated by the maintenance objects and it worked perfectly:

-- Backup database dbi_tools
BACKUP DATABASE  [dbi_tools] 
TO URL = 'https://sqlmidbistorage.blob.core.windows.net/sqlmidbicontainer/sql-mi-dbi.weu157eee5444ccf.database.windows.net_dbi_tools_1_20180611210123.BAK'
WITH COPY_ONLY, CHECKSUM, INIT, FORMAT;

--Verification of the backup https://sqlmidbistorage.blob.core.windows.net/sqlmidbicontainer/sql-mi-dbi.weu157eee5444ccf.database.windows.net_dbi_tools_1_20180611210123.BAK
RESTORE VERIFYONLY FROM URL = 'https://sqlmidbistorage.blob.core.windows.net/sqlmidbicontainer/sql-mi-dbi.weu157eee5444ccf.database.windows.net_dbi_tools_1_20180611210123.BAK'
WITH STATS = 100, CHECKSUM;

 

And to finish this blog post as a good DBA, let's have a look at resource allocation management. The first time I looked at the resources available on the MI, I was very surprised. To get an idea, let's query some DMVs such as sys.dm_os_schedulers, sys.dm_os_sys_memory, or sys.dm_os_sys_info to get a real picture of these resources:

blog 138 - 8 - azure - sql managed instances - resources

Given the number of visible online schedulers, only 8 may be used by the MI. This is an expected outcome according to my initial configuration. Concerning the memory configuration, the theoretical amount of memory available to me should be 8 x 7GB = 56GB according to the Microsoft documentation; the sys.dm_os_sys_memory DMV doesn't really indicate such a cap, while the sys.dm_os_sys_info DMV does (at least it is closer to reality):

blog 138 - 10 - azure - sql managed instances - memory capping
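The screenshots above can be reproduced with queries of this kind (a sketch of the checks, not the exact statements used):

-- Schedulers actually usable by the instance
SELECT COUNT(*) AS visible_online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';

-- Memory as seen from the host versus the target committed for SQLOS
SELECT total_physical_memory_kb / 1024 AS host_memory_mb
FROM sys.dm_os_sys_memory;

SELECT committed_target_kb / 1024 AS committed_target_mb
FROM sys.dm_os_sys_info;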

Are CPU and memory resources managed differently on MI? I found the answer in this article from the SQL Server customer advisor team. For MI, the mechanism responsible for resource management is called Job Objects. That's very interesting! Without going into details, this is exactly the same mechanism used by Docker on Windows, and it is similar (at least in concept) to cgroups on Linux.

Therefore, we may also benefit from another DMV to get details of resource management:

blog 138 - 11 - azure - sql managed instances - job object dmv
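The output above maps to a query like the following; note that sys.dm_os_job_object is an assumption based on the SQL Server customer advisor team article referenced here, while the column names match the parameters listed below:

-- Resource limits applied to the instance process through the Windows Job Object
-- (DMV name assumed from the referenced article)
SELECT cpu_rate,
       cpu_affinity_mask,
       process_memory_limit_mb,
       non_sos_mem_gap_mb
FROM sys.dm_os_job_object;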

Thanks to this SQL Server customer advisor team article, the situation becomes clearer with the following parameter values:

  • cpu_rate of 100% indicates my vCores may be used at 100% of their capacity
  • cpu_affinity_mask indicates we are limited to 8 OS-level processors
  • process_memory_limit_mb is self-explanatory and corresponds to my previous theoretical assumptions :)
  • non_sos_mem_gap_mb corresponds to a safe amount of memory reserved for non-SQLOS activity

 

Conclusion

I think Microsoft is making a great strategic play by introducing this solution for customers. Indeed, change is always a challenge, and moving towards something similar to what we already know allows a smooth transition and better adoption by people. We will see what happens!

 

Cet article Introducing SQL managed instances on Azure est apparu en premier sur Blog dbi services.

6 Easy Steps to Get You Started on Wireframing

Nilesh Jethwa - Tue, 2018-06-12 23:08

A wireframe is a blueprint of sorts that serves as the initial design of a website. In web development, the main purpose of wireframing is to provide an optimal and efficient design for a website without being hindered or affected … Continue reading

Origin: MockupTiger Wireframes

OAC - Thoughts on Moving to the Cloud

Rittman Mead Consulting - Tue, 2018-06-12 12:43

Last week, I spent a couple of days with Oracle at Thames Valley Park and this presented me with a perfect opportunity to sit down and get to grips with the full extent of the Oracle Analytics Cloud (OAC) suite...without having to worry about client requirements or project deadlines!

As a company, Rittman Mead already has solid experience of OAC, but my personal exposure has been limited to presentations, product demonstrations, reading the various postings in the blog community and my existing experiences of Data Visualisation and BI cloud services (DVCS and BICS respectively). You’ll find Francesco’s post a good starting place if you need an overview of OAC and how it differs (or aligns) to Data Visualisation and BI Cloud Services.

So, having spent some time looking at the overall suite and, more importantly, trying to interpret what it could mean for organisations thinking about making a move to the cloud, here are my top three takeaways:

Clouds Come In Different Shapes and Flavours

Two of the main benefits that a move to the cloud offers are simplification in platform provisioning and an increase in flexibility, being able to ramp up or scale down resources at will. Both come with a potential cost benefit, depending on your given scenario and requirement. The first step is understanding the different options in the OAC licensing and feature matrix.

First, we need to draw a distinction between Analytics Cloud and the Autonomous Analytics Cloud (interestingly, both options point to the same page on cloud.oracle.com, which makes things immediately confusing!). In a nutshell though, the distinction comes down to who takes responsibility for the service management: Autonomous Analytics Cloud is managed by Oracle, whilst Analytics Cloud is managed by yourself. It’s interesting to note that the Autonomous offering is marginally cheaper.

Next, Oracle have chosen to extend their BYOL (Bring Your Own License) option from their IaaS services to now incorporate PaaS services. This means that if you have existing licenses for the on-premise software, then you are able to take advantage of what appears to be a significantly discounted cost. Clearly, this is targeted to incentivise existing Oracle customers to make the leap into the Cloud, and should be considered against your ongoing annual support fees.

Since the start of the year, Analytics Cloud now comes in three different versions, with the Standard and Enterprise editions now being separated by the new Data Lake edition. The important things to note are that (possibly confusingly) Essbase is now incorporated into the Data Lake edition of the Autonomous Analytics Cloud and that for the full enterprise capability you have with OBIEE, you will need the Enterprise edition. Each version inherits the functionality of its preceding version: Enterprise edition gives you everything in the Data Lake edition; Data Lake edition incorporates everything in the Standard edition.


Finally, it’s worth noting that OAC aligns to the Universal Credit consumption model, whereby the cost is determined based on the size and shape of the cloud that you need. Services can be purchased as Pay as You Go or Monthly Flex options (with differential costing to match). The PAYG model is based on hourly consumption and is paid for in arrears, making it the obvious choice for short term prototyping or POC activities. Conversely, the Monthly Flex model is paid in advance and requires a minimum 12 month investment and therefore makes sense for full scale implementations. Then, the final piece of the jigsaw comes with the shape of the service you consume. This is measured in OCPUs (Oracle Compute Units), and the larger your memory requirements, the more OCPUs you consume.

Where You Put Your Data Will Always Matter

Moving your analytics platform into the cloud may make a lot of sense and could therefore be a relatively simple decision to make. However, the question of where your data resides is a more challenging subject, given the sensitivities and increasing legislative constraints that exist around where your data can or should be stored. The answer to that question will influence the performance and data latency you can expect from your analytics platform.

OAC is architected to be flexible when it comes to its data sources and consequently the options available for data access are pretty broad. At a high level, your choices are similar to those you would have when implementing on-premise, namely:

  • perform ELT processing to transform and move the data (into the cloud);
  • replicate data from source to target (in the cloud) or;
  • query data sources via direct access.

These are supplemented by a fourth option to use the inbuilt Data Connectors available in OAC to connect to cloud or on-premise databases, other proprietary platforms or any other source accessible via JDBC. This is probably a decent path for exploratory data usage within DV, but I’m not sure it would always make the best long term option.


Unsurprisingly, with the breadth of options comes a spectrum of tooling that can be used for shifting your data around and it is important to note that depending on your approach, additional cloud services may or may not be required.

For accessing data directly at its source, the preferred route seems to be to use RDC (Remote Data Connector), although it is worth noting that support is limited to Oracle (including OLAP), SQL Server, Teradata or DB2 databases. Also, be aware that RDC operates within WebLogic Server and so this will be needed within the on-premise network.

Data replication is typically achieved using Data Sync (the reincarnation of the DAC, which OBIA implementers will already be familiar with), although it is worth mentioning that there are other routes that could be taken, such as APEX or SQL Developer, depending on the data volumes and latency you have to play with.

Classic ELT processing can be achieved via Oracle Data Integrator (either the Cloud Service, a traditional on-premise implementation or a hybrid-model).

Ultimately, due care and attention needs to be taken when deciding on your data architecture as this will have a fundamental effect on the simplicity with which data can be accessed and interpreted, the query performance achieved and the data latency built into your analytics.

Data Flows Make For Modern Analytics Simplification

A while back, I wrote a post titled Enabling a Modern Analytics Platform in which I attempted to describe ways that Mode 1 (departmental) and Mode 2 (enterprise) analytics could be built out to support each other, as opposed to undermining one another. One of the key messages was the importance of having an effective mechanism for transitioning your Mode 1 outputs back into Mode 2 as seamlessly as possible. (The same is true in reverse for making enterprise data available as a Mode 1 input.)

One of the great things about OAC is how it serves to simplify this transition. Users are able to create analytic content based on data sourced from a broad range of locations: at the simplest level, Data Sets can be built from flat files or via one of the available Data Connectors to relational, NoSQL, proprietary database or Essbase sources. Moreover, enterprise curated metadata (via RPD lift-and-shift from an on-premise implementation) or analyst developed Subject Areas can be exposed. These sources can be ‘mashed’ together directly in a DV project or, for more complex or repeatable actions, Data Flows can be created to build Data Sets. Data Flows are pretty powerful, not only allowing users to join disparate data but also perform some useful data preparation activities, ranging from basic filtering, aggregation and data manipulation actions to more complex sentiment analysis, forecasting and even some machine learning modelling features. Importantly, Data Flows can be set to output their results to disk, either written to a Data Set or even to a database table and they can be scheduled for repetitive refresh.

For me, one of the most important things about the Data Flows feature is that it provides a clear and understandable interface which shows the sequencing of each of the data preparation stages, providing valuable information for any subsequent reverse engineering of the processing back into the enterprise data architecture.


In summary, there are plenty of exciting and innovative things happening with Oracle Analytics in the cloud and as time marches on, the case for moving to the cloud in one shape or form will probably get more and more compelling. However, beyond a strategic decision to ‘Go Cloud’, there are many options and complexities that need to be addressed in order to make a successful start to your journey - some technical, some procedural and some organisational. Whilst a level of planning and research will undoubtedly smooth the path, the great thing about the cloud services is that they are comparatively cheap and easy to initiate, so getting on and building a prototype is always going to be a good, exploratory starting point.

Categories: BI & Warehousing

PDB AWR Host CPU

Tom Kyte - Tue, 2018-06-12 10:06
Does the "Host CPU" in the PDB AWR mean the CPU usage for the PDB, CDB or the host?
Categories: DBA Blogs

Insert multiple csv in a zip

Tom Kyte - Tue, 2018-06-12 10:06
Hello, My requirement is to 1. Read two query result and write into two different CSV files 2. Zip these two CSV files in a single zip file. There is a similar qn posted that tells who to create csv file and store the result (clob and blob) i...
Categories: DBA Blogs

Sql Execution Time v/s Elapsed time v/s CPU Time

Tom Kyte - Tue, 2018-06-12 10:06
Hello , I have been working on Database monitoring stuff where we are looking for long running queries in DB . From the inbuilt setup which i have received from DBA , its showing CPU_TIME and ELAPSED_TIME but none of them is matching with Oracle E...
Categories: DBA Blogs

Why DevOps Matters for Enterprise BI

Rittman Mead Consulting - Tue, 2018-06-12 09:44
Why DevOps Matters for Enterprise BI

Why are people frustrated with their existing enterprise BI tools such as OBIEE? My view is that it costs too much to produce relevant content. I think some of this is down to the tools themselves, and some of it is down to process.

Starting with the tools, they are not “bad” tools; the traditional licensing model can be expensive in today’s market, and traditional development methods are time-consuming and hence expensive. The vendor’s response is to move to the cloud and to highlight cost savings that can be made by having a managed platform. Oracle Analytics Cloud (OAC) is essentially OBIEE installed on Oracle’s servers in Oracle’s data centres with Oracle providing your system administration, coupled with the ability to flex your licensing on a monthly or annual basis.

Cloud does give organisations the potential for more agility. Provisioning servers can no longer hold up the start of a project, and if a system needs to increase capacity, then more CPUs or nodes can be added. This latter case is a bit murky due to the cost implications and the option to try and resolve performance issues through query efficiency on the database.

I don’t think this solves the problem. Tools that provide reports and dashboards are becoming more commoditised; up-and-coming vendors and platform providers are offering the service for a fraction of the cost of the traditional vendors. They may lack some of the enterprise features like open security models; however, this is an area that platform providers are continually improving. Over the last 10 years, Oracle's focus for OBIEE has been more on integration than on innovation. Oracle DV was a significant change; however, there is a danger that Oracle lost the first-mover advantage to tools such as Tableau and QlikView. Additionally, some critical features like lineage, software lifecycle development, versioning and process automation are not built into OBIEE and, worse still, the legacy design and architecture of the product often hinder them.

So this brings me back round to process. Defining “good” processes and having tools to support them is one of the best ways you can keep your BI tools relevant to the business by reducing the friction in generating content.

What is a “good” process? Put simply, a process that reduces the time between identifying a business need and realising it, with zero impact on existing components of the system. Also, a “good” process should provide visibility of any design, development and testing, plus documentation of changes, typically including lineage in a modern BI system. Continuous integration is the Holy Grail.

This is why DevOps matters. Using automated migration across environments, regression tests, automatically generated documentation in the form of lineage, native support for version control systems, supported merge processes and ideally a scripting interface or API to automate repetitive tasks, such as changing the data type of a group of fields system-wide, can dramatically reduce the gap from idea to realisation.

So, I would recommend that when looking at your enterprise BI system, you not only consider the vendor, location and features but also focus on the potential for process optimisation and automation. Automation could be something that the vendor builds into the tool, or you may need to use accelerators or software provided by a third party. Over the next few weeks, we will be publishing some examples and case studies of how our BI and DI Developer Toolkits have helped clients and enabled them to automate some or all of the BI software development cycle, reducing the time to release new features and increasing the confidence and robustness of the system.

Categories: BI & Warehousing

Oracle Empowers Retailers to Detect Omnichannel Theft with Embedded Science

Oracle Press Releases - Tue, 2018-06-12 08:00
Press Release
Oracle Empowers Retailers to Detect Omnichannel Theft with Embedded Science
New Innovations in Oracle Retail XBRi Cloud Service Reveal New Sources of Omnichannel Risk and Provide Investigators with Automated Alerts and Actionable Insights

Redwood Shores, Calif.—Jun 12, 2018

Today Oracle announced enhancements to Oracle Retail XBRi Loss Prevention Cloud Service that help retailers quickly identify and investigate fraudulent activities across store and online transactions. With embedded science, Oracle Retail XBRi can detect new data anomalies and alert loss prevention analysts to potential risks and drive quick resolution by providing the transactional insight needed to investigate. The Oracle Retail XBRi enhancements also include embedded science developed over 20 years of research and development in retail including more than 4,000 reporting metrics and attributes representing experiences and best practices of global retailers.

“The opportunity for retail theft grows as retailers reach to meet customer expectations and struggle to manage individual channels of engagement,” said Jeff Warren, vice president, Oracle Retail. “The updates to Oracle Retail XBRi provide customers with the tools to protect their margins and is another example of the continued value we aim to deliver through cloud services.”

“This new generation of cloud services accelerates the path to ROI within months of deployment,” said Chris Sarne, senior director of omnichannel strategy, Oracle Retail.  “The result is a smart, holistic approach to preventing fraud and saving money that scales to grow with your business.”

Oracle Retail XBRi Loss Prevention Cloud Service provides the retail community with:
  • Agility, Security and a Path to Lower Total Cost of Ownership. One of the goals of using embedded science is to clarify and lower the total cost of installation, set up, and maintenance of solutions by taking advantage of iterative cloud updates. The cloud-based solution offers even greater precision, building on existing knowledge and adding new features and analytics automatically.
  • Prompt Investigation. Upon deployment, the solution immediately identifies exceptions and escalates them to analysts and investigators without requiring manual intervention to initiate the process.
  • Comprehensive Discovery. The cloud service employs embedded science to quickly surface anomalous behavior and potential cases. With new insight to emerging fraudulent activities comes opportunities to help store managers train their associates to detect and avoid potential loss before it happens.
  • Intuitive Configuration. The Oracle Retail XBRi interface allows analysts to easily configure reports that reflect the unique needs of omnichannel brands without incurring additional costs or relying on IT specialists.
  • POS Integration that Streamlines Investigation. A heightened degree of integration between Oracle Retail XBRi Cloud Service and Oracle Retail Xstore Point-of-Service further enhances the ability of XBRi’s embedded science to pinpoint new sources of risk and deliver purpose-built reports that streamline and support investigative activities while leveraging the best practices built into the Oracle Retail portfolio.

Oracle will continue to add new knowledge from the Oracle Retail customer community to combat new patterns of fraud that emerge in the complex omnichannel retail environment.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


The Medrec 12.2.1.3 Datamodel DDL

Darwin IT - Tue, 2018-06-12 06:57
Next week I deliver the training 'Weblogic 12c Tuning and Troubleshooting'. One of the labs is to have the sample application MedRec generate Stuck Threads, so that the students can investigate and try to solve that, or actually configure the server so that it causes an automatic restart.

To do so I have to deliberately break the application and so I need the source. I have an earlier version of the application, but not the sources. So I have to go to the latest MedRec. I actually like that, because it looks more modern.

The MedRec application is available if you install WebLogic with samples.

You can run the script demo_oracle.ddl against the database; it can be found at:
$WL_HOME/samples/server/examples/src/examples/common/ddl

The medrec.ear can be found at:
$WL_HOME/samples/server/medrec/dist/standalone

I ran into quite some confusion and frustration, because I found that this combination, although from the same samples folder, does not work. Not only does this medrec.ear expect the table names in plural (PRESCRIPTIONS) where the script creates them in singular (PRESCRIPTION), it also expects a separate DRUGS table with a foreign key column DRUG_ID in PRESCRIPTIONS, and a few other changes.

I had a version of the scripts from earlier versions of WebLogic's MedRec. Based on the exceptions in the server log, I refactored/reverse-engineered the scripts.

Using those, I could successfully log in and view the patient records of fred@golf.com:


First we need to create a schema using createDBUserMedrec.sql:
prompt Create user medrec with connect, resource roles;
grant connect, resource to medrec identified by welcome1;
alter user medrec
default tablespace users
temporary tablespace temp;
alter user medrec quota unlimited on users;

Drop tables (if needed) using medrec_dropall.sql:
DROP TABLE "MEDREC"."ADMINISTRATORS";

DROP TABLE "MEDREC"."OPENJPA_SEQUENCE_TABLE";

DROP TABLE "MEDREC"."PATIENTS";

DROP TABLE "MEDREC"."PATIENTS_RECORDS";

DROP TABLE "MEDREC"."PHYSICIANS";

DROP TABLE "MEDREC"."PRESCRIPTIONS";

DROP TABLE "MEDREC"."DRUGS";

DROP TABLE "MEDREC"."RECORDS";

DROP TABLE "MEDREC"."RECORDS_PRESCRIPTIONS";

Create the tables using medrec_tables.sql:
CREATE TABLE "MEDREC"."ADMINISTRATORS" (
"ID" INTEGER NOT NULL,
"EMAIL" VARCHAR(255),
"PASSWORD" VARCHAR(255),
"USERNAME" VARCHAR(255),
"VERSION" INTEGER,
PRIMARY KEY ( "ID" ) );


CREATE TABLE "MEDREC"."OPENJPA_SEQUENCE_TABLE" (
"ID" SMALLINT NOT NULL,
"SEQUENCE_VALUE" INTEGER,
PRIMARY KEY ( "ID" ) );


CREATE TABLE "MEDREC"."PATIENTS" (
"ID" INTEGER NOT NULL,
"EMAIL" VARCHAR(255),
"PASSWORD" VARCHAR(255),
"USERNAME" VARCHAR(255),
"PHONE" VARCHAR(255),
"DOB" TIMESTAMP,
"GENDER" VARCHAR(20),
"SSN" VARCHAR(255),
"STATUS" VARCHAR(20),
"VERSION" INTEGER,
"FIRSTNAME" VARCHAR(255),
"LASTNAME" VARCHAR(255),
"MIDDLENAME" VARCHAR(255),
"CITY" VARCHAR(255),
"COUNTRY" VARCHAR(255),
"STATE" VARCHAR(255),
"STREET1" VARCHAR(255),
"STREET2" VARCHAR(255),
"ZIP" VARCHAR(255),
PRIMARY KEY ( "ID" ) );


CREATE TABLE "MEDREC"."PATIENTS_RECORDS" (
"PATIENT_ID" INTEGER,
"RECORDS_ID" INTEGER );


CREATE TABLE "MEDREC"."PHYSICIANS" (
"ID" INTEGER NOT NULL,
"EMAIL" VARCHAR(255),
"PASSWORD" VARCHAR(255),
"USERNAME" VARCHAR(255),
"PHONE" VARCHAR(255),
"VERSION" INTEGER,
"FIRSTNAME" VARCHAR(255),
"LASTNAME" VARCHAR(255),
"MIDDLENAME" VARCHAR(255),
PRIMARY KEY ( "ID" ) );


CREATE TABLE "MEDREC"."DRUGS"
( "ID" NUMBER(*,0) NOT NULL ENABLE,
"NAME" VARCHAR2(255 BYTE),
"FREQUENCY" VARCHAR2(255 BYTE),
"PRICE" NUMBER(10,2),
"VERSION" NUMBER(*,0),
PRIMARY KEY ( "ID" ) );


CREATE TABLE "MEDREC"."PRESCRIPTIONS" (
"ID" INTEGER NOT NULL,
"DATE_PRESCRIBED" TIMESTAMP,
"FREQUENCY" VARCHAR(255),
"INSTRUCTIONS" VARCHAR(255),
"REFILLS_REMAINING" INTEGER,
"VERSION" INTEGER,
"DOSAGE" NUMBER,
"DRUG_ID" NUMBER,
PRIMARY KEY ( "ID" ) );

CREATE TABLE "MEDREC"."RECORDS" (
"ID" INTEGER NOT NULL,
"CREATE_DATE" TIMESTAMP,
"RECORDDATE" TIMESTAMP,
"DIAGNOSIS" VARCHAR(255),
"NOTES" VARCHAR(255),
"SYMPTOMS" VARCHAR(255),
"VERSION" INTEGER,
"PATIENT_ID" INTEGER NOT NULL,
"PHYSICIAN_ID" INTEGER NOT NULL,
"DIASTOLIC_BLOOD_PRESSURE" INTEGER,
"HEIGHT" INTEGER,
"PULSE" INTEGER,
"SYSTOLIC_BLOOD_PRESSURE" INTEGER,
"TEMPERATURE" INTEGER,
"WEIGHT" INTEGER,
PRIMARY KEY ( "ID" ) );

CREATE TABLE "MEDREC"."RECORDS_PRESCRIPTIONS" (
"RECORD_ID" INTEGER,
"PRESCRIPTIONS_ID" INTEGER );


Insert data using medrec_data.sql:
INSERT INTO "MEDREC"."ADMINISTRATORS" (
"ID", "EMAIL", "PASSWORD", "USERNAME", "VERSION" )
VALUES (
201,'admin@avitek.com','weblogic','admin@avitek.com',1
);

COMMIT;

INSERT INTO "MEDREC"."OPENJPA_SEQUENCE_TABLE" (
"ID", "SEQUENCE_VALUE" )
VALUES (
0,251
);

COMMIT;

INSERT INTO "MEDREC"."PATIENTS"
VALUES (
51,'page@fish.com','weblogic','page@fish.com','4151234564',
TIMESTAMP '1972-03-18 00:00:00','MALE','888888888','APPROVED',3,
'Page','Trout','A','Ponte Verde','United States','FL',
'235 Montgomery St','Suite 15','32301'
);
INSERT INTO "MEDREC"."PATIENTS"
VALUES (
52,'fred@golf.com','weblogic','fred@golf.com','4151234564',
TIMESTAMP '1965-04-26 00:00:00','MALE','123456789','APPROVED',3,
'Fred','Winner','I','San Francisco','United States','CA',
'1224 Post St','Suite 100','94115'
);
INSERT INTO "MEDREC"."PATIENTS"
VALUES (
53,'volley@ball.com','weblogic','volley@ball.com','4151234564',
TIMESTAMP '1971-09-17 00:00:00','MALE','333333333','APPROVED',3,
'Gabrielle','Spiker','H','San Francisco','United States','CA',
'1224 Post St','Suite 100','94115'
);
INSERT INTO "MEDREC"."PATIENTS"
VALUES (
54,'charlie@star.com','weblogic','charlie@star.com','4151234564',
TIMESTAMP '1973-11-29 00:00:00','MALE','444444444','REGISTERED',3,
'Charlie','Florida','E','Ponte Verde','United States','FL',
'235 Montgomery St','Suite 15','32301'
);
INSERT INTO "MEDREC"."PATIENTS"
VALUES (
55,'larry@bball.com','weblogic','larry@bball.com','4151234564',
TIMESTAMP '1959-03-13 00:00:00','MALE','777777777','APPROVED',3,
'Larry','Parrot','J','San Francisco','United States','CA',
'1224 Post St','Suite 100','94115'
);

COMMIT;

INSERT INTO "MEDREC"."PHYSICIANS" (
"ID", "EMAIL", "PASSWORD", "USERNAME", "PHONE", "VERSION", "FIRSTNAME", "LASTNAME", "MIDDLENAME" )
VALUES (
1,'mary@md.com','weblogic','mary@md.com','1234567812',4,'Mary','Oblige','J'
);

COMMIT;

Insert into "MEDREC"."DRUGS" (ID,NAME,FREQUENCY,PRICE,VERSION) values (101,'Advil','1/4hrs',1.0, 2);
Insert into "MEDREC"."DRUGS" (ID,NAME,FREQUENCY,PRICE,VERSION) values (102,'Codeine','1/6hrs',2.5,2);
Insert into "MEDREC"."DRUGS" (ID,NAME,FREQUENCY,PRICE,VERSION) values (103,'Drixoral','1tspn/4hrs',3.75,2);

COMMIT;

Insert into "MEDREC"."PRESCRIPTIONS" (ID,DATE_PRESCRIBED,FREQUENCY,INSTRUCTIONS,REFILLS_REMAINING,VERSION,DOSAGE,DRUG_ID) values (101,to_timestamp('18-JUL-99 12.00.00.000000000 AM','DD-MON-RR HH.MI.SSXFF AM'),'1/4hrs',null,0,2,1,101);
Insert into "MEDREC"."PRESCRIPTIONS" (ID,DATE_PRESCRIBED,FREQUENCY,INSTRUCTIONS,REFILLS_REMAINING,VERSION,DOSAGE,DRUG_ID) values (102,to_timestamp('30-JUN-93 12.00.00.000000000 AM','DD-MON-RR HH.MI.SSXFF AM'),'1/6hrs',null,1,2,1,102);
Insert into "MEDREC"."PRESCRIPTIONS" (ID,DATE_PRESCRIBED,FREQUENCY,INSTRUCTIONS,REFILLS_REMAINING,VERSION,DOSAGE,DRUG_ID) values (103,to_timestamp('18-JUL-99 12.00.00.000000000 AM','DD-MON-RR HH.MI.SSXFF AM'),'1tspn/4hrs',null,0,2,1,103);


COMMIT;

INSERT INTO "MEDREC"."RECORDS"
VALUES (
151,TIMESTAMP '1991-05-01 00:00:00',TIMESTAMP '1991-05-01 00:00:00','Allergic to coffee. Drink tea.',
'','Drowsy all day.',2,51,1,85,70,75,125,98,180
);
INSERT INTO "MEDREC"."RECORDS"
VALUES (
152,TIMESTAMP '1991-05-01 00:00:00',TIMESTAMP '1991-05-01 00:00:00','Light cast needed.',
'At least 20 sprained ankles since 15.','Sprained ankle.',
2,53,1,85,70,75,125,98,180
);
INSERT INTO "MEDREC"."RECORDS"
VALUES (
153,TIMESTAMP '1989-08-05 00:00:00',TIMESTAMP '1989-08-05 00:00:00','Severely sprained interior ligament. Surgery required.','Cast will be necessary before and after.','Twisted knee while playing soccer.',2,52,1,85,70,75,125,98,180
);
INSERT INTO "MEDREC"."RECORDS"
VALUES (
154,TIMESTAMP '1993-06-30 00:00:00',TIMESTAMP '1993-06-30 00:00:00','Common cold. Prescribed codiene cough syrup.','Call back if not better in 10 days.','Sneezing, coughing, stuffy head.',2,52,1,85,70,75,125,98,180
);
INSERT INTO "MEDREC"."RECORDS"
VALUES (
155,TIMESTAMP '1999-07-18 00:00:00',TIMESTAMP '1999-07-18 00:00:00','Mild stroke. Aspirin advised.','Patient needs to stop smoking.','Complains about chest pain.',2,52,1,85,70,75,125,98,180
);
INSERT INTO "MEDREC"."RECORDS"
VALUES (
156,TIMESTAMP '1991-05-01 00:00:00',TIMESTAMP '1991-05-01 00:00:00','Patient is crazy. Recommend politics.','','Overjoyed with everything.',2,55,1,85,70,75,125,98,180
);

COMMIT;

INSERT INTO "MEDREC"."RECORDS_PRESCRIPTIONS"
VALUES (
154,102
);
INSERT INTO "MEDREC"."RECORDS_PRESCRIPTIONS"
VALUES (
155,101
);
INSERT INTO "MEDREC"."RECORDS_PRESCRIPTIONS"
VALUES (
155,103
);

COMMIT;
To create the data source you can use the following WLST script, createDataSource.py:
#############################################################################
# Create DataSource for WLS 12c Tuning & Troubleshooting workshop
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.1, 2018-01-22
#
#############################################################################
# Modify these values as necessary
import os,sys, traceback
scriptName = sys.argv[0]
adminHost=os.environ["ADM_HOST"]
adminPort=os.environ["ADM_PORT"]
admServerUrl = 't3://'+adminHost+':'+adminPort
ttServerName=os.environ["TTSVR_NAME"]
adminUser='weblogic'
adminPwd='welcome1'
#
dsName = 'MedRecGlobalDataSourceXA'
dsJNDIName = 'jdbc/MedRecGlobalDataSourceXA'
initialCapacity = 5
maxCapacity = 10
capacityIncrement = 1
driverName = 'oracle.jdbc.xa.client.OracleXADataSource'
dbUrl = 'jdbc:oracle:thin:@darlin-vce.darwin-it.local:1521:orcl'
dbUser = 'medrec'
dbPassword = 'welcome1'
#
def createDataSource(dsName, dsJNDIName, initialCapacity, maxCapacity, capacityIncrement, dbUser, dbPassword, dbUrl, targetSvrName):
  # Check if the data source already exists
  try:
    cd('/JDBCSystemResources/' + dsName)
    print 'The JDBC Data Source ' + dsName + ' already exists.'
    jdbcSystemResource=cmo
  except WLSTException:
    print 'Creating new JDBC Data Source named ' + dsName + '.'
    edit()
    startEdit()
    cd('/')
    # Create data source
    jdbcSystemResource = create(dsName, 'JDBCSystemResource')
    jdbcResource = jdbcSystemResource.getJDBCResource()
    jdbcResource.setName(dsName)
    # Set JNDI name
    jdbcResourceParameters = jdbcResource.getJDBCDataSourceParams()
    jdbcResourceParameters.setJNDINames([dsJNDIName])
    jdbcResourceParameters.setGlobalTransactionsProtocol('TwoPhaseCommit')
    # Create connection pool
    connectionPool = jdbcResource.getJDBCConnectionPoolParams()
    connectionPool.setInitialCapacity(initialCapacity)
    connectionPool.setMaxCapacity(maxCapacity)
    connectionPool.setCapacityIncrement(capacityIncrement)
    # Create driver settings
    driver = jdbcResource.getJDBCDriverParams()
    driver.setDriverName(driverName)
    driver.setUrl(dbUrl)
    driver.setPassword(dbPassword)
    driverProperties = driver.getProperties()
    userProperty = driverProperties.createProperty('user')
    userProperty.setValue(dbUser)
    # Set data source target
    targetServer = getMBean('/Servers/' + targetSvrName)
    jdbcSystemResource.addTarget(targetServer)
    # Activate changes
    save()
    activate(block='true')
    print 'Data Source created successfully.'
  return jdbcSystemResource

def main():
  # Connect to the administration server
  try:
    connect(adminUser, adminPwd, admServerUrl)
    #
    createDataSource(dsName, dsJNDIName, initialCapacity, maxCapacity, capacityIncrement, dbUser, dbPassword, dbUrl, ttServerName)
    #
    print("\nExiting...")
    exit()
  except:
    apply(traceback.print_exception, sys.exc_info())
    exit(exitcode=1)
#call main()
main()

MedRec also needs an administrative user, which can be created with createUser.py:
print 'starting the script ....'
#
import os
adminHost=os.environ["ADM_HOST"]
adminPort=os.environ["ADM_PORT"]
admServerUrl = 't3://'+adminHost+':'+adminPort
#
adminUser='weblogic'
adminPwd='welcome1'
#
realmName = 'myrealm'
#
def addUser(realm,username,password,description):
  print 'Prepare User',username,'...'
  if realm is not None:
    authenticator = realm.lookupAuthenticationProvider("DefaultAuthenticator")
    if authenticator.userExists(username)==1:
      print '[Warning] User',username,'already exists.'
    else:
      authenticator.createUser(username,password,description)
      print '[INFO] User',username,'has been created successfully.'


connect(adminUser,adminPwd,admServerUrl)

security=getMBean('/').getSecurityConfiguration()
realm=security.lookupRealm(realmName)
addUser(realm,'administrator','administrator123','MedRec Administrator')

disconnect()


Deploy the MedRec application with deployMedRec.py:
#############################################################################
# Deploy MedRec for WLS 12c Tuning & Troubleshooting workshop
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2018-01-22
#
#############################################################################
# Modify these values as necessary
import os,sys, traceback
scriptName = sys.argv[0]
adminHost=os.environ["ADM_HOST"]
adminPort=os.environ["ADM_PORT"]
admServerUrl = 't3://'+adminHost+':'+adminPort
ttServerName=os.environ["TTSVR_NAME"]
adminUser='weblogic'
adminPwd='welcome1'
#
appName = 'medrec'
appSource = '../ear/medrec.ear'
#
# Deploy the application
def deployApplication(appName, appSource, targetServerName):
  print 'Deploying application ' + appName + '.'
  progress = deploy(appName=appName,path=appSource,targets=targetServerName)
  # Wait for the deployment to complete
  while progress.isRunning():
    pass
  print 'Application ' + appName + ' deployed.'
#
#
def main():
  # Connect to the administration server
  try:
    connect(adminUser, adminPwd, admServerUrl)
    #
    deployApplication(appName, appSource, ttServerName)
    #
    print("\nExiting...")
    exit()
  except:
    apply(traceback.print_exception, sys.exc_info())
    exit(exitcode=1)
#call main()
main()


Quickly setup a persistent React application

Amis Blog - Tue, 2018-06-12 03:20

After recently picking up the React framework, I figured I'd share how I quickly set up my projects to implement persistence. This way you spend minimal time setting up the skeleton of your application, letting you focus on adding functionality. This is done by combining a few tools. Let's take a closer look.

Should you be interested in the code itself, go ahead and get it here.

Tools

NPM/Yarn

Be sure to install a package manager like npm or Yarn. We will use it to install and run some of the other tools.

Create-react-app
If you have ever worked with React you undoubtedly came across this CLI tool. It provides the boilerplate code you need for the front-end of your application. Take a look at the documentation to get a sense of how to use it.

FeathersJS
This is a framework built on top of Express. It provides another CLI tool we can use to quickly set up the boilerplate code for building a REST API. What makes it powerful is the concept of hooks (see this link for more) as well as the ability to model our entities in a JavaScript file. After starting this service we can manipulate data in our database using REST calls.
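As a minimal illustration of a hook (hypothetical, not part of the generated project), a before-create hook could trim the employee name before it is stored; in a generated app this would typically live in something like src/services/employees/employees.hooks.js:

//employees.hooks.js (illustrative sketch)

module.exports = {
  before: {
    create: [
      async (context) => {
        // Trim the incoming name before it is written to the database.
        if (context.data && typeof context.data.name === 'string') {
          context.data.name = context.data.name.trim();
        }
        return context;
      }
    ]
  }
};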

MongoDB
This is the database we will use – be sure to download, install and start it. It’s a NoSQL database and uses a JSON document structure.

Steps
Create a folder for your application named <application name>. I like to create two separate folders in here: one 'frontend', the other 'backend'. Usually I will add the source folder to a version control tool like Git. If you're dealing with a larger application, however, you might want to create two separate repositories for the front- and backend. Using Visual Studio Code, my file structure looks like this:

[Screenshot: project file structure in Visual Studio Code]

Fire up your terminal. Install create-react-app globally.

npm install -g create-react-app

Navigate to your frontend folder. Initialize your first application.

cd frontend
create-react-app <application name>

Note how the CLI will add another folder with the application name into your frontend folder. Since we already created a root folder, copy the content of this folder and move it one level higher. Now the structure looks like this:

[Screenshot: folder structure after running create-react-app]

Install FeathersJS CLI globally.

npm install -g @feathersjs/cli

Navigate to the backend folder of your project and generate a Feathers application:

cd ../backend
feathers generate app

The CLI will prompt for a project name. Follow along with these settings:

[Screenshot: Feathers CLI application settings]

Alright, now it’s time to generate a service so we can start making REST calls. In true Oracle fashion, let’s make a service for the entity employees. While in the backend folder, run:

feathers generate services

Follow along:

[Screenshot: Feathers CLI service settings]

Navigate to the employees.model.js file. The file structure of the backend folder should look like this:

[Screenshot: backend folder structure]

In this file we can specify what employees look like. For now, let's just give them a name property of type String and make it required. Also, let's delete the timestamps section to keep things clean. It should look like this:

//employees.model.js

module.exports = function (app) {
  const mongooseClient = app.get('mongooseClient');
  const { Schema } = mongooseClient;
  const employees = new Schema({
    name: { type: String, required: true }
  });

  return mongooseClient.model('employees', employees);
};

Great. We are almost good to go. First, fire up MongoDB. Windows users, see here. Unix users:

sudo service mongod start

Next, navigate to the backend folder. Using the terminal, run the following command:

npm start

You should see a message saying: Feathers application started on http://localhost:3030

The service we created is available at http://localhost:3030/employees. Navigate there. You should see JSON data – though right now all you'll probably see is metadata. Using this address we can make REST calls to view and manipulate the data. You could use curl commands in the terminal, simply use the browser, or install a tool like Postman.
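For example, assuming the default REST transport set up by the Feathers CLI, a couple of calls from the browser console could look like this (the employee name is just an example):

// Create an employee through the REST endpoint.
fetch('http://localhost:3030/employees', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Jane' })
})
  .then((res) => res.json())
  .then((created) => console.log('created', created));

// List the employees; the default response is paginated: { total, limit, skip, data }.
fetch('http://localhost:3030/employees')
  .then((res) => res.json())
  .then((page) => console.log(page.data));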

Next up we need a way to access this service in our frontend section of the application. For this purpose, let’s create a folder in the location: frontend/src/api. In here, create the file Client.js.

This file should contain the following code:

//Client.js

import io from 'socket.io-client';
import feathers from '@feathersjs/client';

const socket = io('http://localhost:3030');
const client = feathers();

client.configure(feathers.socketio(socket));
client.configure(feathers.authentication({
  storage: window.localStorage
}));

export default client;

Make sure the libraries we refer to are included in the package.json of our frontend. The package.json should look like this:

{
  "name": "react-persistent-start",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "@feathersjs/client": "^3.4.4",
    "react": "^16.4.0",
    "react-dom": "^16.4.0",
    "react-scripts": "1.1.4",
    "socket.io-client": "^1.7.3"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject"
  }
}

Be sure to run npm install after updating this file.

That’s basically all we need to perform CRUD operations from our front-end section of the application. To make the example a little more vivid, let’s implement a button into the App.js page which was automatically generated by the create-react-app CLI.

First, I’ll get rid of the boilerplate code in there and I will import the client we just created. Next, create a button that calls method handleClick() when the React onClick method is fired. In here we will call the client, let it call the service and perform a create operation for us. The code will look like this:

//App.js

import React, { Component } from 'react';

import client from './api/Client';
import './App.css';

class App extends Component {
  handleClick() {
    client.service('employees').create({
      name: "Nathan"
    });
  }

  render() {
    return (
      <div className="App">
        <button onClick={this.handleClick}>Add employee</button>
      </div>
    );
  }
}

export default App;

Navigate to the frontend folder in the terminal and make sure the node modules are installed correctly by running npm install. Now run npm start. Navigate to http://localhost:3030/employees and verify there is no employee there yet. On http://localhost:3000 our page should be available and show a single button. Click the button and refresh the http://localhost:3030/employees page. You can see we have added an employee with the same name as yours truly.

[Screenshot: JSON result showing the new employee]

(I use a Chrome plugin called JSON viewer to get the layout as shown in the picture)

Using the code provided in the handleClick() method you can expand upon this. See this link for all the calls you can make to provide CRUD functionality using FeathersJS.
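As a rough sketch of a few of those calls through the client we created earlier (the id values are placeholders):

const employees = client.service('employees');

employees.find().then((page) => console.log(page.data)); // read all employees
employees.get('<id>').then((emp) => console.log(emp));   // read one employee
employees.patch('<id>', { name: 'New name' });           // partial update
employees.remove('<id>');                                // delete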

That’s basically all you need to start adding functionality to your persistent React application.

Happy developing! The code is available here.

The post Quickly setup a persistent React application appeared first on AMIS Oracle and Java Blog.

Machine Learning Applied - TensorFlow Chatbot UI with Oracle JET Custom Component

Andrejus Baranovski - Mon, 2018-06-11 16:22
This post is based on Oracle Code 2018 Shenzhen, Warsaw and Berlin talks. View presentation on SlideShare:


In my previous post I outlined how to build the chatbot backend with TensorFlow - Classification - Machine Learning Chatbot with TensorFlow. Today's post is the next step - I will explain how to build a custom UI on top of the TensorFlow chatbot with Oracle JET.

You can download the complete source code (which includes the TensorFlow part, the backend for chatbot context processing and the JET custom component chatbot UI) from my GitHub repository.

Here is a snapshot of the solution architecture:


TensorFlow is used for the machine learning and text classification task. Flask exposes TensorFlow to the outside world through REST. Contextual chatbot conversation processing is implemented in the Node.js backend, and communication with the Oracle JET client is handled by Socket.io.

A key point in the chatbot implementation is constructing the right data structure for the training process: the more accurate the learning, the better the classification results afterwards. Chatbot training data can come in the form of JSON. Training data quality can be measured by the overlap between intents and sample sentences - the more overlap you have, the weaker the machine learning output and the less accurate the classification. Training data can also contain information that is not used directly by TensorFlow - we can include intent context processing in the same structure, to be used by the context processing algorithm. Sample JSON structure for training data:
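A hypothetical sketch of what such a structure could look like (tag names, patterns, responses and context values are purely illustrative, not the author's actual training set):

{
  "intents": [
    {
      "tag": "greeting",
      "patterns": ["Hi", "Hello", "Good day"],
      "responses": ["Hello, how can I help?"],
      "context_set": ""
    },
    {
      "tag": "order_status",
      "patterns": ["Where is my order", "Track my order"],
      "responses": ["Please give me your order number."],
      "context_set": "awaiting_order_number"
    }
  ]
}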


Accurate classification by TensorFlow is only one piece of the chatbot functionality; we also need to maintain conversation context. This can be achieved in the Node.js backend with a custom algorithm. In my example, when no context is set, TensorFlow is called to classify the statement and produce intent probabilities. Multiple intents may be classified for the same sentence - TensorFlow will return multiple probabilities - and it is up to you either to always choose the intent with the top probability or to ask the user to choose. Communication back to the client is handled through Socket.io by calling the socket.emit function:
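A hypothetical Node.js sketch of this branch (event names and the classification helper are illustrative, not the author's exact code):

const io = require('socket.io')(3000);

// Stub standing in for the REST call to the Flask/TensorFlow service; in the
// real backend this would POST the sentence and return { intent, probability } pairs.
async function classifyWithTensorFlow(sentence) {
  return [{ intent: 'greeting', probability: 0.93 }];
}

io.on('connection', (socket) => {
  socket.on('chat_message', async (message) => {
    const intents = await classifyWithTensorFlow(message);
    const top = intents[0]; // or let the user choose between candidates
    socket.emit('bot_reply', {
      intent: top.intent,
      probability: top.probability,
      text: 'Hello, how can I help?' // in the real backend this comes from the intent's responses
    });
  });
});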


If the context was already set, we don't call the classification function - we don't need it in this step. Instead we check the intent context mapping to determine what the next step should be. Based on that information, we send a question or action back to the client, again through Socket.io by calling the socket.emit function:


The chatbot UI is implemented as a JET custom component (check how it works in the JET cookbook). This makes it easy to reuse the same component in various applications:


Here is an example where the chatbot UI is included in a consuming application. It comes with a custom listener in which any custom actions are executed. The custom listener allows you to move custom logic outside of the chatbot component, making it truly reusable:


An example of custom logic: based on the chatbot reply, we can load an application module, assign parameter values, etc.:
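A hypothetical sketch of such a listener (function and property names are illustrative, not the component's actual API):

// The consuming application decides what to do with a chatbot reply.
function chatbotListener(reply) {
  if (reply.action === 'showOrders') {
    // e.g. switch to another application module and pass parameters along
    loadModule('orders', { customerId: reply.customerId });
  } else {
    console.log('Bot said:', reply.text);
  }
}

// Stub standing in for whatever module-loading mechanism the application uses.
function loadModule(name, params) {
  console.log('Loading module', name, 'with parameters', params);
}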


The chatbot UI implementation is based on a list that renders bot and client messages using a template. The template detects whether a message belongs to the client or the bot and applies the required style, which helps to render a readable list. There are also an input area and control buttons:


The JS module executes the logic that displays a bot message: it adds the message to the list of messages and generates an event to be handled by the custom logic listener. A message is sent from the client to the bot server by calling the Socket.io socket.emit function:
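A hypothetical sketch of that client-side send/receive logic (event names and the messages array are illustrative, not the component's exact code):

const io = require('socket.io-client');
const socket = io('http://localhost:3000');

const messages = [];

function sendMessage(text) {
  messages.push({ from: 'client', text: text }); // show the user's own message in the list
  socket.emit('chat_message', text);             // forward it to the Node.js backend
}

socket.on('bot_reply', (reply) => {
  messages.push({ from: 'bot', text: reply.text }); // append the bot's answer
});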


Here is the final result - chatbot box implemented with Oracle JET:

what is read consistency

Tom Kyte - Mon, 2018-06-11 15:46
Could you explain in your own words what read consistency is in Oracle 4.0?
Categories: DBA Blogs

SESSION parameter shows different value

Tom Kyte - Mon, 2018-06-11 15:46
Hi there. As you guys suggested, the other day I was trying to change the process and session parameter values as follows, and everything went perfectly. But after starting up the database, the SESSION parameter shows me a value that I wasn't ex...
Categories: DBA Blogs
