Feed aggregator

converting TIMESTAMP(6) to TIMESTAMP(0)

Tom Kyte - 14 hours 58 min ago
Currently I have a column with datatype TIMESTAMP(6) but now i have a requirement to change it to TIMESTAMP(0). Because we cannot decrease the precision, ORA-30082: datetime/interval column to be modified must be empty to decrease fractional sec...
Categories: DBA Blogs

Mail Restrictions using UTL_SMTP

Tom Kyte - 14 hours 58 min ago
Hi Tom, I have a requirement to send email to particular domain mail id's. But My Mail server is global mail server we can send mail to any mail ids. Is there any options in Oracle to restrict the mail send as global. For example: My mail host is...
Categories: DBA Blogs

Expanded Oracle Accelerator Gives Texas Startups a Boost

Oracle Press Releases - 17 hours 24 min ago
Press Release
Expanded Oracle Accelerator Gives Texas Startups a Boost
New Austin-based program offers enterprise customer network, mentoring, resources and cloud technology, as well as Capital Factory collaboration, to help startups grow and compete globally

Redwood Shores, Calif.—Jun 22, 2018

Oracle today announced the opening of the Oracle Startup Cloud Accelerator in Austin, Texas, the global program’s first U.S. location and part of the Oracle Global Startup Ecosystem. The new accelerator provides statewide startups with access to a network of more than 430,000 Oracle customers, technical and business mentors, state-of-the-art technology, co-working space at Capital Factory, introductions to partners, talent, and investors, and free Oracle Cloud credits. In addition to local expertise, the program offers an ever-expanding global community of startup peers and program alumni.

Austin Startup Cloud Accelerator

The Oracle Startup Cloud Accelerator, which is open to early-stage technology and technology-enabled startups, is accepting applications through August 7. Startups will begin the six-month program in early September.

Oracle’s Austin Startup Cloud Accelerator is run by JD Weinstein. Weinstein, a former Principal at WPP Ventures and previously a Venture Associate at Capital Factory, brings a deep understanding of the local startup ecosystem and scaling startups through enterprise relationships.

“Austin and the State of Texas are thriving centers of innovation, and we are proud to dive in and support the startup community with cutting edge resources, including enterprise customer channels, hands-on experience with Oracle technical and product teams, mentoring from top business leaders, executives, and investors, as well as connections to thousands of entrepreneurs and corporate partners through our collaboration with Capital Factory,” said JD Weinstein, head of Oracle Startup Ecosystem in Austin.

Capital Factory & Austin Network

Oracle is working with Capital Factory to provide connections to the organization’s expansive network of local entrepreneurs, prominent CEOs, venture capitalists, corporations, and government officials. Startups in Oracle’s accelerator will also receive access to Capital Factory’s Mentor Network, free co-working space, and will benefit from the reach of the organization’s social media and event communities. Members of Oracle’s broader Global Startup Ecosystem will also benefit from the relationship with Capital Factory.

“We are excited that Oracle has invested in Austin as the first U.S. location of its global accelerator,” said Joshua Baer, founder and executive director, Capital Factory. “The combination of our mentor network and Oracle’s cloud platform and customer connections will provide startups a major advantage in growing their business.”

The Startup Cloud Accelerator is supported by Oracle’s rapidly growing presence in Austin. The company recently opened a state-of-the-art campus on Lady Bird Lake. Oracle’s expanding employee base and the new facility will provide additional resources and support for startups in the accelerator program.

Commitment to Global Startups

“Rooted in its own entrepreneurial beginnings, Oracle has long believed that startups are at the heart of innovation,” said Reggie Bradford, senior vice president, Oracle Startup Ecosystem and Accelerator. “The Austin accelerator is key to our mission of creating a global ecosystem of co-development and co-innovation where everyone—the startups, customers, and Oracle—can win.”

The Oracle Global Startup Ecosystem offers residential and nonresidential startup programs, plus a burgeoning higher education program, that power cloud-based technology innovation. The residential Oracle Startup Cloud Accelerator has locations in Austin, Bangalore, Bristol, Delhi–NCR, Mumbai, Paris, São Paulo, Singapore and Tel Aviv. Oracle Scaleup Ecosystem is the nonresidential, virtual-style program available for growing companies around the globe. Interested startups, venture capital firms and other organizations, regardless of their location, can apply for Oracle Scaleup Ecosystem here.

Contact Info
Julia Allyn
Oracle Corporate Communications
+1.650.607.1338
julia.allyn@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Julia Allyn

  • +1.650.607.1338

MySQL 8.0 – Roles are finally there

Yann Neuhaus - 17 hours 55 min ago

Roles have existed in many RDBMSs for a long time. Starting with version 8.0, this functionality is finally available in MySQL.
The most important advantage is that you define a role, with its set of permissions, only once and then assign it to each user, instead of declaring the privileges individually for every user.

In MySQL, a role can be created like a user, but without the “identified by” clause and without the ability to log in:

mysqld2-(root@localhost) [(none)]> CREATE ROLE 'r_sakila_read';
Query OK, 0 rows affected (0.03 sec)
mysqld2-(root@localhost) [(none)]> select user,host,authentication_string from mysql.user;
+------------------+-----------+------------------------------------------------------------------------+
| user             | host      | authentication_string                                                  |
+------------------+-----------+------------------------------------------------------------------------+
| r_sakila_read    | %         |                                                                        |
| multi_admin      | localhost | $A$005$E?D/>efE+Rt12omzr.78VnfR3kxj8KLG.aP84gdPMxW7A/7uG3D80B          |
| mysql.infoschema | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE                              |
| mysql.session    | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE                              |
| mysql.sys        | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE                              |
| root             | localhost | {u]E/m)qyn3YRk2u.JKdxj9/6Krd8uqNtHRzKA38cG5qyC3ts5                     |
+------------------+-----------+------------------------------------------------------------------------+

After that you can grant some privileges to this role, as you usually do for users:

mysqld2-(root@localhost) [(none)]> grant select on sakila.* to 'r_sakila_read';
Query OK, 0 rows affected (0.10 sec)
mysqld2-(root@localhost) [(none)]> show grants for r_sakila_read;
+---------------------------------------------------+
| Grants for r_sakila_read@%                        |
+---------------------------------------------------+
| GRANT USAGE ON *.* TO `r_sakila_read`@`%`         |
| GRANT SELECT ON `sakila`.* TO `r_sakila_read`@`%` |
+---------------------------------------------------+
2 rows in set (0.00 sec)

Now you can create your user:

mysqld2-(root@localhost) [(none)]> create user 'u_sakila1'@localhost identified by 'qwepoi123098';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

And yes, check your password policy: starting with version 8.0, the new validate_password component replaces the old validate_password plugin. It is now enabled by default, so you no longer have to install it.

mysqld2-(root@localhost) [(none)]> show variables like 'validate_password_%';
+--------------------------------------+--------+
| Variable_name                        | Value  |
+--------------------------------------+--------+
| validate_password_check_user_name    | ON     |
| validate_password_dictionary_file    |        |
| validate_password_length             | 8      |
| validate_password_mixed_case_count   | 1      |
| validate_password_number_count       | 1      |
| validate_password_policy             | MEDIUM |
| validate_password_special_char_count | 1      |
+--------------------------------------+--------+
7 rows in set (0.01 sec)
mysqld2-(root@localhost) [(none)]> create user 'u_sakila1'@localhost identified by 'QwePoi123098!';
Query OK, 0 rows affected (0.08 sec)

In my example the default password-checking level is MEDIUM, which means “Length; numeric, lowercase/uppercase, and special characters” (I will cover the validate_password component in more detail in an upcoming blog post). Let’s go back to roles…

Grant the created role to your created user (as you usually grant a privilege):

mysqld2-(root@localhost) [(none)]> grant 'r_sakila_read' to 'u_sakila1'@localhost;
Query OK, 0 rows affected (0.01 sec)
mysqld2-(root@localhost) [(none)]> flush privileges;
Query OK, 0 rows affected (0.02 sec)

At this point, if you check the privileges of your user with a USING clause, you will get information about the granted roles and also the privileges associated with each role:

mysqld2-(root@localhost) [(none)]> show grants for 'u_sakila1'@localhost using 'r_sakila_read';
+-------------------------------------------------------+
| Grants for u_sakila1@localhost                        |
+-------------------------------------------------------+
| GRANT USAGE ON *.* TO `u_sakila1`@`localhost`         |
| GRANT SELECT ON `sakila`.* TO `u_sakila1`@`localhost` |
| GRANT `r_sakila_read`@`%` TO `u_sakila1`@`localhost`  |
+-------------------------------------------------------+
3 rows in set (0.00 sec)

Now if you try to connect with your user and select data from the database on which you have the read privilege, you will discover that something is still missing:

mysqld2-(root@localhost) [(none)]>  system mysql -u u_sakila1 -p
mysqld2-(u_sakila1@localhost) [(none)]> use sakila;
ERROR 1044 (42000): Access denied for user 'u_sakila1'@'localhost' to database 'sakila'
mysqld2-(u_sakila1@localhost) [(none)]> SELECT CURRENT_ROLE();
+----------------+
| CURRENT_ROLE() |
+----------------+
| NONE           |
+----------------+
1 row in set (0.00 sec)

Why?
Because you have to define which roles are active when the user authenticates. You can do that by adding the “DEFAULT ROLE role” clause during user creation (starting with version 8.0.3), or later through the following statement:

mysqld2-(root@localhost) [(none)]> set default role r_sakila_read to 'u_sakila1'@localhost;
Query OK, 0 rows affected (0.08 sec)
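
For reference, a minimal sketch of the creation-time variant (the user name is illustrative; the password simply satisfies the MEDIUM policy shown above):

mysqld2-(root@localhost) [(none)]> CREATE USER 'u_sakila2'@localhost IDENTIFIED BY 'QwePoi123098!' DEFAULT ROLE 'r_sakila_read';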

Otherwise, starting with version 8.0.2, you can let the server activate all roles granted to each user by default, by setting the activate_all_roles_on_login variable to ON:

mysqld2-(root@localhost) [(none)]> show variables like '%activate%';
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| activate_all_roles_on_login | OFF   |
+-----------------------------+-------+
1 row in set (0.00 sec)
mysqld2-(root@localhost) [(none)]> set global activate_all_roles_on_login=ON;
Query OK, 0 rows affected (0.00 sec)
mysqld2-(root@localhost) [(none)]> show variables like '%activate%';
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| activate_all_roles_on_login | ON    |
+-----------------------------+-------+
1 row in set (0.01 sec)

So if you check again, everything works correctly:

mysqld2-(root@localhost) [mysql]> select * from role_edges;
+-----------+----------------+-----------+-----------+-------------------+
| FROM_HOST | FROM_USER      | TO_HOST   | TO_USER   | WITH_ADMIN_OPTION |
+-----------+----------------+-----------+-----------+-------------------+
| %         | r_sakila_read  | localhost | u_sakila1 | N                 |
+-----------+----------------+-----------+-----------+-------------------+
1 row in set (0.00 sec)
mysqld2-(root@localhost) [(none)]>  system mysql -u u_sakila1 -p
mysqld2-(u_sakila1@localhost) [(none)]> use sakila
mysqld2-(u_sakila1@localhost) [sakila]> connect
Connection id:    29
Current database: sakila
mysqld2-(u_sakila1@localhost) [sakila]> select CURRENT_ROLE();
+---------------------+
| CURRENT_ROLE()      |
+---------------------+
| `r_sakila_read`@`%` |
+---------------------+
1 row in set (0.00 sec)

Enjoy your roles now! ;)

 

The article MySQL 8.0 – Roles are finally there appeared first on Blog dbi services.

What’s new in EDB EFM 3.1?

Yann Neuhaus - 18 hours 59 min ago

At the beginning of this month EnterpriseDB announced a new version of its Failover Manager. Version 2.1 introduced controlled switchover operations, version 3.0 brought support for PostgreSQL 10, and now: what’s new in version 3.1? It might seem this is just a bugfix release, but there is more, including one enhancement I’ve been waiting on for a long time.

As you might remember: when you stopped EFM (before version 3.1), the .nodes file was always emptied. What we usually did was create a backup of that file so we could simply copy it back, but this was somewhat annoying. The current version comes with a new property in the efm.properties file to handle this better:

# When set to true, EFM will not rewrite the .nodes file whenever new nodes
# join or leave the cluster. This can help starting a cluster in the cases
# where it is expected for member addresses to be mostly static, and combined
# with 'auto.allow.hosts' makes startup easier when learning failover manager.
stable.nodes.file=true

When set to “true” the file will not be touched when you stop/restart EFM on a node:

root@:/etc/edb/efm/ [] cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.60:9998 192.168.22.61:9998 
root@:/etc/edb/efm/ [] systemctl stop efm-3.1.service
root@:/etc/edb/efm/ [] cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.60:9998 192.168.22.61:9998 
root@:/etc/edb/efm/ [] systemctl start efm-3.1.service
root@:/etc/edb/efm/ [] cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.60:9998 192.168.22.61:9998 

A small but really nice improvement. At least with our deployments the number of cluster nodes is rather static, so this helps a lot. While this is a new property, another property is gone:

root@:/etc/edb/efm/ [] grep efm.license efm.properties

This means you no longer need a license key to test EFM for more than 60 days, which is great as well. Another small improvement is that you can now see on which node the VIP is currently running:

root@:/etc/edb/efm/ [] /usr/edb/efm/bin/efm cluster-status efm
Cluster Status: efm

	Agent Type  Address              Agent  DB       VIP
	-----------------------------------------------------------------------
	Master      192.168.22.60        UP     UP       192.168.22.63*
	Standby     192.168.22.61        UP     UP       192.168.22.63

Allowed node host list:
	192.168.22.60 192.168.22.61

Membership coordinator: 192.168.22.61

Standby priority host list:
	192.168.22.61

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.60        0/40006F0        
	Standby     192.168.22.61        0/40006F0        

	Standby database(s) in sync with master. It is safe to promote.

When it comes to the VIP there is another enhancement, which is controlled by a new property:

root@:/etc/edb/efm/ [] grep virtualIp.single efm.properties | tail -1
virtualIp.single=true

When this is set to “true” EFM will use the same VIP address on the new master after a failover. This was the default behavior before EFM 3.1. If you want to use another VIP on a new master, you can now do that by switching this to “false” and providing a different VIP in the properties file on each node.
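
A minimal sketch of that per-node setup (the addresses are illustrative, and virtualIp as the name of the address property is an assumption):

# efm.properties on node 1
virtualIp.single=false
virtualIp=192.168.22.63

# efm.properties on node 2 - a different address
virtualIp.single=false
virtualIp=192.168.22.64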

That’s the important ones for me. The full list is in the documentation.

 

The article What’s new in EDB EFM 3.1? appeared first on Blog dbi services.

utl_dbws causes ORA-29532 and bad_record_mac

Yann Neuhaus - 19 hours 57 min ago

After installing the OJVM patch set update APR-2017 on an 11.2.0.4 database with PSU APR-2017 installed, the first call of the utl_dbws package was successful, but after a while utl_dbws calls always failed with ORA-29532 and bad_record_mac. All Java objects remained valid.
Even after trying the procedures described in MOS document 2314363.1, utl_dbws worked the first time, but after that it always failed.
We observed that a while after restarting the database, the m000 process ran and tried to recompile Java classes. When we waited until m000 finished, utl_dbws always succeeded.
The m000 process start was caused by the parameter JAVA_JIT_ENABLED being set to TRUE.

When setting JAVA_JIT_ENABLED to FALSE, utl_dbws always worked fine. Locking of the Java classes by the application probably prevented them from being recompiled properly.
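
A sketch of the corresponding workaround (the parameter comes from the post; SCOPE=BOTH assumes an spfile is in use):

-- Disable the Java JIT so utl_dbws calls no longer race against
-- m000 recompilation; JAVA_JIT_ENABLED can be changed dynamically.
ALTER SYSTEM SET java_jit_enabled = FALSE SCOPE=BOTH;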

 

Cet article utl_dbws causes ORA-29532 and bad_record_mac est apparu en premier sur Blog dbi services.

Oracle WebLogic 12.2.1.x Configuration Guide for Oracle Utilities available

Anthony Shorten - Thu, 2018-06-21 19:06

A new whitepaper is now available for use with Oracle Utilities Application Framework based products that support Oracle WebLogic 12.2.1.x and above. The whitepaper walks through the setup of the domain using the Fusion Domain Templates instead of the templates supplied with the product. In future releases of Oracle Utilities Application Framework, the product-specific domain templates will not be supplied, as the Fusion Domain Templates take a more prominent role in deploying Oracle Utilities products.

The whitepaper covers the following topics:

  • Setting up the Domain for Oracle Utilities products
  • Additional Web Services configuration
  • Configuration of Global Flush functionality in Oracle WebLogic 12.2.1.x
  • Frequently asked installation questions

The whitepaper is available as Oracle WebLogic 12.2.1.x Configuration Guide (Doc Id: 2413918.1) from My Oracle Support.

JavaScript - Method to Call Backend Logic in Sequential Loop

Andrejus Baranovski - Thu, 2018-06-21 15:54
When we call a backend REST service from JavaScript, the call is executed asynchronously by default. This means it will not wait until a response from the backend is received, but will continue executing code. This is expected and desired functionality in most cases. But there may be a requirement to call the backend in a synchronized way. Example: calling a backend service multiple times in a loop, where the next call must be invoked only after the previous call is complete. With the default async functionality, the loop will complete before the first REST call.

Here is an example of calling a backend REST service (through the Oracle JET API, using jQuery in the background). The call is made three times in a loop, with the success callback printing a message. One more message is printed at the end of each loop iteration:
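
(The original post shows this code as a screenshot; below is a minimal sketch of the kind of code being described, with a hypothetical fetchData helper standing in for the JET/jQuery REST call.)

// Hypothetical stand-in for the JET/jQuery REST call: simulates an
// asynchronous backend request that completes after a short delay.
function fetchData(url, onSuccess) {
  setTimeout(onSuccess, 100);
}

function invokeLoopAsync() {
  for (let i = 1; i <= 3; i++) {
    fetchData('/hypothetical/endpoint', function () {
      console.log('Success callback for call ' + i);
    });
    console.log('End of loop iteration ' + i);
  }
}

invokeLoopAsync();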


Three backend REST calls are executed in the loop, but the loop completes before the REST call from the first iteration returns: the log shows all three end-of-iteration messages before the first success-callback message.

This might be valid and expected behaviour in most cases. But depending on the backend logic, you may want to guarantee that no call from the second iteration is invoked until the first iteration's call is complete. This can be achieved by using an async function and a Promise inside the loop. We should use the await new Promise syntax and resolve the promise in the success callback by calling next():
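
(Again a sketch rather than the original screenshot, reusing the hypothetical fetchData helper from above.)

async function invokeLoopSequential() {
  for (let i = 1; i <= 3; i++) {
    // Wrap each call in a Promise and resolve it (next) from the success
    // callback, so iteration i+1 starts only after call i has completed.
    await new Promise(function (next) {
      fetchData('/hypothetical/endpoint', function () {
        console.log('Success callback for call ' + i);
        next();
      });
    });
    console.log('End of loop iteration ' + i);
  }
}

invokeLoopSequential();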


With the promise applied, the loop executes sequentially: the next loop iteration starts only after the backend service's success callback has been invoked, which you can see from the log.

Source code is available on my GitHub repository.

Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7

Wim Coekaerts - Thu, 2018-06-21 10:08

Yesterday we released the 5th version of our "UEK" package for Oracle Linux 7 (UEKR5). This kernel version is based on a 4.14.x mainline Linux kernel. One of the nice things is that 4.14 is an upstream long-term stable (LTS) kernel version, maintained by gregkh.

UEKR5 is a 64-bit only kernel. We released it on x86(-64) and ARM64 (aarch64) and it is supported starting with Oracle Linux 7.

Updating to UEKR5 is easy - just add the UEKR5 yum repo and update. We have release notes posted here and a more detailed blog post here.
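
A minimal sketch of that update on Oracle Linux 7 (assuming the standard ol7_UEKR5 repo id and the yum-utils package):

# enable the UEKR5 channel, update, and boot into the new kernel
yum-config-manager --enable ol7_UEKR5
yum update
reboot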

A lot of new stuff is in UEKR5... we also put a few extra tools in the yum repo that let you make use of these newer features where tool updates are needed: xfsprogs, btrfs-progs, the ixpdimm libraries and pmemsdk, updated dtrace utilities, updated bcache tools, updated iproute, etc.

For those that don't remember, we launched the first version of our kernel for Oracle Linux back in 2010 when we launched the 8 socket Exadata system. We have been releasing a new Linux kernel for Oracle Linux on a regular basis ever since. Every Exadata system, in fact every Oracle Engineered system that runs Linux uses Oracle Linux and uses one of the versions of UEK inside. So for customers, it's the most tested kernel out there, you can run the exact same OS software stack as we run, on our biggest and fastest database servers, on-premises or in the cloud, and in fact, run the exact same OS software stack as we run inside Oracle Cloud in general. That's pretty unique compared to other vendors where the underlying stack is a black box. Not here.

10/2010 - 2.6.32 [UEK]   - OL5/OL6
03/2012 - 2.6.39 [UEKR2] - OL5/OL6
10/2013 - 3.8    [UEKR3] - OL6/OL7
01/2016 - 4.1    [UEKR4] - OL6/OL7
06/2018 - 4.14   [UEKR5] - OL7

The source code for UEKR5 (as has been the case since day 0) is fully available publicly: the entire git repo is there with the changelog, and all the patches are there with their full changelog history - not just some tar file with patch files on top of tar files to obfuscate things for some reason. It's all just -right there-. In fact we recently moved our kernel git repo to github.

Have at it.

 

Demo: GraphQL with node-oracledb

Christopher Jones - Thu, 2018-06-21 09:18

Some of our node-oracledb users recently commented they have moved from REST to GraphQL so I thought I'd take a look at what it is all about.

I can requote the GraphQL talking points with the best of them, but things like "Declarative Data Fetching" and "a schema with a defined type system is the contract between client and server" are easier to understand with examples.

In brief, GraphQL:

  • Provides a single endpoint that responds to queries. No need to create multiple endpoints to satisfy varying client requirements.

  • Has more flexibility and efficiency than REST. Being a query language, you can adjust which fields are returned by queries, so less data needs to be transferred. You can parameterize the queries, for example to alter the number of records returned - all without changing the API or needing new endpoints.

Let's look at the payload of a GraphQL query. This query with the root field 'blog' asks for the blog with id of 2. Specifically it asks for the id, the title and the content of that blog to be returned:

{
  blog(id: 2) {
    id
    title
    content
  }
}

The response from the server would contain the three request fields, for example:

{ "data": { "blog": { "id": 2, "title": "Blog Title 2", "content": "This is blog 2" } } }

Compare that result with this query that does not ask for the title:

{
  blog(id: 2) {
    id
    content
  }
}

With the same data, this would give:

{ "data": { "blog": { "id": 2, "content": "This is blog 2" } } }

So, unlike REST, we can choose what data needs to be transferred. This makes clients more flexible to develop.

Let's looks at some code. I came across this nice intro blog post today which shows a basic GraphQL server in Node.js. For simplicity its data store is an in-memory JavaScript object. I changed it to use an Oracle Database backend.

The heart of GraphQL is the type system. For the blog example, a type 'Blog' is created in our Node.js application with three obvious values and types:

type Blog {
  id: Int!,
  title: String!,
  content: String!
}

The exclamation mark means a field is required.

The part of the GraphQL Schema to query a blog post by id is specified in the root type 'Query':

type Query {
  blog(id: Int): Blog
}

This defines a capability to query a single blog post and return the Blog type we defined above.

We may also want to get all blog posts, so we add a "blogs" field to the Query type:

type Query {
  blog(id: Int): Blog
  blogs: [Blog],
}

The square brackets indicates a list of Blogs is returned.

A query to get all blogs would be like:

{
  blogs {
    id
    title
    content
  }
}

You can see that the queries include the 'blog' or 'blogs' field. We can pass all queries to the one endpoint, and that endpoint will determine how to handle each. There is no need for multiple endpoints.

To manipulate data requires some 'mutations', typically making up the CUD of CRUD:

input BlogEntry {
  title: String!,
  content: String!
}

type Mutation {
  createBlog(input: BlogEntry): Blog!,
  updateBlog(id: Int, input: BlogEntry): Blog!,
  deleteBlog(id: Int): Blog!
}

To start with, the "input" type allows us to define input parameters that will be supplied by a client. Here a BlogEntry contains just a title and content. There is no id, since that will be automatically created when a new blog post is inserted into the database.

In the mutations, you can see a BlogEntry type is in the argument lists for the createBlog and updateBlog fields. The deleteBlog field just needs to know the id to delete. The mutations all return a Blog. An example of using createBlog is shown later.

Combined, we represent the schema in Node.js like:

const typeDefs = `
type Blog {
  id: Int!,
  title: String!,
  content: String!
}
type Query {
  blogs: [Blog],
  blog(id: Int): Blog
}
input BlogEntry {
  title: String!,
  content: String!
}
type Mutation {
  createBlog(input: BlogEntry): Blog!,
  updateBlog(id: Int, input: BlogEntry): Blog!,
  deleteBlog(id: Int): Blog!
}`;

This is the contract, defining the data types and available operations.

In the backend, I decided to use Oracle Database 12c's JSON features. Needless to say, using JSON gives developers the power to modify and improve the schema during the life of an application:

CREATE TABLE blogtable (blog CLOB CHECK (blog IS JSON));

INSERT INTO blogtable VALUES (
  '{"id": 1, "title": "Blog Title 1", "content": "This is blog 1"}');
INSERT INTO blogtable VALUES (
  '{"id": 2, "title": "Blog Title 2", "content": "This is blog 2"}');
COMMIT;

CREATE UNIQUE INDEX blog_idx ON blogtable b (b.blog.id);
CREATE SEQUENCE blog_seq START WITH 3;

Each field of the JSON strings corresponds to the values of the GraphQL Blog type. (The 'dotted' notation syntax I'm using in this post requires Oracle DB 12.2, but can be rewritten for 12.1.0.2.)

The Node.js ecosystem has some powerful modules for GraphQL. The package.json is:

{ "name": "graphql-oracle", "version": "1.0.0", "description": "Basic demo of GraphQL with Oracle DB", "main": "graphql_oracle.js", "keywords": [], "author": "christopher.jones@oracle.com", "license": "MIT", "dependencies": { "oracledb": "^2.3.0", "express": "^4.16.3", "express-graphql": "^0.6.12", "graphql": "^0.13.2", "graphql-tools": "^3.0.2" } }

If you want to see the full graphql_oracle.js file it is here.

Digging into it, the application has some 'Resolvers' to handle the client calls. From Dhaval Nagar's demo, I modified these resolvers to invoke new helper functions that I created:

const resolvers = {
  Query: {
    blogs(root, args, context, info) {
      return getAllBlogsHelper();
    },
    blog(root, {id}, context, info) {
      return getOneBlogHelper(id);
    }
  },
  [ . . . ]
};

To conclude the GraphQL part of the sample, the GraphQL and Express modules hook up the schema type definition from above with the resolvers, and start an Express app:

const schema = graphqlTools.makeExecutableSchema({typeDefs, resolvers});

app.use('/graphql', graphql({
  graphiql: true,
  schema
}));

app.listen(port, function() {
  console.log('Listening on http://localhost:' + port + '/graphql');
})

On the Oracle side, we want to use a connection pool, so the first thing the app does is start one:

await oracledb.createPool(dbConfig);

The helper functions can get a connection from the pool. For example, the helper to get one blog is:

async function getOneBlogHelper(id) {
  let sql = 'SELECT b.blog FROM blogtable b WHERE b.blog.id = :id';
  let binds = [id];
  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds);
  await conn.close();
  return JSON.parse(result.rows[0][0]);
}

The JSON.parse() call nicely converts the JSON string that is stored in the database into the JavaScript object to be returned.

Starting the app and loading the endpoint in a browser gives a GraphiQL IDE. After entering the query on the left and clicking the 'play' button, the middle pane shows the returned data. The right-hand pane gives the API documentation.

To insert a new blog, the createBlog mutation can be used:
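
(The GraphiQL screenshot is not reproduced here; a createBlog call would look something like the following, with illustrative values.)

mutation {
  createBlog(input: {title: "Blog Title 3", content: "This is blog 3"}) {
    id
    title
    content
  }
}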

If you want to play around more, I've put the full set of demo-quality files for you to hack on here. You may want to look at the GraphQL introductory videos, such as this comparison with REST.

To finish, GraphQL has the concept of real time updates with subscriptions, something that ties in well with the Continuous Query Notification feature of node-oracledb 2.3. Yay - something else to play with! But that will have to wait for another day. Let me know if you beat me to it.

Oracle Introduces New Java SE Subscription Offering for Broader Enterprise Java Support

Oracle Press Releases - Thu, 2018-06-21 09:00
Press Release
Oracle Introduces New Java SE Subscription Offering for Broader Enterprise Java Support
Java SE Subscription Provides Licensing and Support for Java SE on Servers, Desktops, and Cloud Deployments

Redwood Shores Calif—Jun 21, 2018

In order to further support the millions of worldwide businesses running Java in production, Oracle today announced Java SE Subscription, a new subscription model that covers all Java SE licensing and support needs. Java SE Subscription removes enterprise boardroom concerns around mission-critical, timely software performance, stability and security updates. Java SE Subscription complements Oracle’s long-standing and continued free Java SE releases and stewardship of the OpenJDK ecosystem, where Oracle now produces open source OpenJDK binaries, enabling developers and organizations that do not need commercial support or enterprise management tools.

Java SE Subscription provides commercial licensing, including commercial features and tools such as the Java Advanced Management Console to identify, manage and tune Java SE desktop use across the enterprise. It also includes Oracle Premier Support for current and previous Java SE versions. For further details please visit the FAQ list at: http://www.oracle.com/technetwork/java/javaseproducts/overview/javasesubscriptionfaq-4891443.html

“Companies want full flexibility over when and how they update their production applications,” said Georges Saab, VP Java Platform Group at Oracle. “Oracle is the world’s leader in providing both open source and commercially supported Java SE innovation, stability, performance and security updates for the Java Platform. Our long-standing investment in Java SE ensures customers get predictable and timely updates.”

“The subscription model for updates and support has been long established in the Linux ecosystem. Meanwhile people are increasingly used to paying for services rather than products,” said James Governor, analyst and co-founder of RedMonk. “It’s natural for Oracle to offer a monthly Java SE subscription to suit service-based procurement models for enterprise customers.”

"At Gluon we are strong believers in commercial support offerings around open source software, as it enables organizations to continue to produce software, and the developer community to ensure that they have access to the source code." said Johan Vos, Co-founder and CTO of Gluon. "Today's announcement from Oracle ensures those in the Java Community that need an additional level of support can receive it, and ensures that Java developers can still leverage the open-source software for creating their software. The Java SE Subscription model from Oracle is complementary to how companies like Gluon tailor their solutions around Java SE, Java EE and JavaFX on mobile, embedded and desktop."

To learn more about Java SE Subscription, please visit https://www.oracle.com/java/java-se-subscription.html. Java is the world’s most popular programming language, with over 12 million developers running Java. Java is also the #1 developer choice for cloud, with over 21 billion cloud-connected Java virtual machines.

Contact Info
Alex Shapiro
Oracle
+1 415-608-5044
alex.shapiro@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Alex Shapiro

  • +1 415-608-5044

Kscope18: It's a Wrap!

Rittman Mead Consulting - Thu, 2018-06-21 08:23

As announced a few weeks back, I represented Rittman Mead at ODTUG's Kscope18, hosted in the magnificent Walt Disney World Dolphin Resort. It's always hard to be credible when telling people you are going to Disney World for work, but Kscope is a must-go event if you are in the Oracle landscape.


In the Sunday symposium, Oracle PMs share hints about the products' latest capabilities and roadmaps; then come three full days of presentations spanning from the traditional Database, EPM and BI tracks to new entries like Blockchain. On top of this there is the opportunity to be introduced to a network of Oracle experts including Oracle ACEs and Directors, PMs, and people willing to share their experience with Oracle (and other) tools.

Sunday Symposium and Presentations

I attended the Oracle Analytics (BI and Essbase) Sunday Symposium run by Gabby Rubin and Matt Milella from Oracle. It was interesting to see the OAC product enhancements and roadmap as well as the feature catch-up in the latest on-premises release of OBIEE (version 12.2.1.4.0).

As expected, most of the push is towards OAC (Oracle Analytics Cloud): all new features will be developed there and eventually (but with no assurance on this) ported to the on-premises version. This makes a lot of sense from Oracle's point of view, since it gives them the ability to produce new features quickly: features need to be tested against only a single hardware/software stack rather than the multitude they support on-premises.

Most of the enhancements are expected in the Mode 2/Self-Service BI area covered by Oracle Analytics Cloud Standard, since a) this is the overall trend of the BI industry and b) the features required by traditional dashboard-style reporting are already well covered by OBIEE.
The following are just a few of the items you can expect in future versions:

  • Recommendations during the data preparation phase like GeoLocation and Date enrichments
  • Data Flow enhancements like incremental updates or parametrized data-flows
  • New Visualizations and in general more control over the settings of the single charts.

In general Oracle's idea is to provide a single tool that meets both the needs of Mode 1 and Mode 2 Analytics (Self Service vs Centralized) rather than focusing on solving one need at a time like other vendors do.

Special mention goes to Oracle Autonomous Analytics Cloud, released a few weeks ago, which differs from traditional OAC in that backups, patching and service monitoring are managed automatically by Oracle, relieving the customer of those tasks.

During the main conference days (Monday to Wednesday) I attended a lot of very insightful presentations as well as the Oracle ACE Briefing, which gave me ideas for future blog posts, so stay tuned! As written previously, I had two sessions accepted for Kscope18: "Visualizing Streams" and "DevOps and OBIEE: Do it Before it's too late"; the following paragraphs share the details of both, with links to the slides.

Visualizing Streams

One of the latest trends in the data and analytics space is the transition from old-style batch-based reporting systems, which by design added a delay between an event's creation and its appearance in reports, to the concept of streaming: ingesting and delivering event information and analytics as soon as the event is created.


The session explains how the analytics space has changed in recent times, providing details on how to set up a modern analytical platform that includes streaming technologies like Apache Kafka, SQL-based enrichment tools like Confluent's KSQL, and connections to self-service BI tools like Oracle's Data Visualization via SQL-on-Hadoop technologies like Apache Drill. The slides of the session are available here.

DevOps and OBIEE: Do it Before it's Too Late

In the second session (slides here) I initially went through the motivations for applying DevOps principles to OBIEE: the self-service BI wave started as a response to the long time to delivery associated with old-school centralized reporting projects. Huge monolithic sets of requirements to be delivered, no easy way to provide development isolation, and manual testing and code promotion were only a few of the stoppers for fast delivery.


After an initial analysis of the default OBIEE development methods, the presentation explains how to apply DevOps principles to an OBIEE (or OAC) environment, specifically:

  • Code versioning techniques
  • Feature-driven environment creation
  • Automated promotion
  • Automated regression testing

It then details how the Rittman Mead BI Developer Toolkit, partially described here, can act as an accelerator for the adoption of these practices in any custom OBIEE implementation and delivery process.

As mentioned before, the overall Kscope experience is great: plenty of technical presentations, roadmap information, networking opportunities and also much fun! Looking forward to Kscope19 in Seattle!

Categories: BI & Warehousing

Intercollegiate Tennis Association and Oracle Announce Multi-Year Extension

Oracle Press Releases - Thu, 2018-06-21 07:00
Press Release
Intercollegiate Tennis Association and Oracle Announce Multi-Year Extension

TEMPE, Ariz. and Redwood Shores, Calif.—Jun 21, 2018

The Intercollegiate Tennis Association and Oracle are excited to announce a multi-year extension to their alliance, as Oracle continues to strengthen its ongoing commitment to collegiate tennis.

The Oracle ITA alliance includes Oracle’s ongoing sponsorship of the Oracle ITA Collegiate Tennis Rankings, the Oracle ITA Masters and Oracle ITA National Fall Championships, while adding title sponsorships to the ITA Summer Circuit (now branded as the Oracle ITA Summer Circuit Powered By UTR) and the Division I and Division III National Team Indoor Championships.

“Our partnership with ITA has been a great success to date, and we’re eager to keep expanding the game,” said Oracle CEO Mark Hurd. “We want to ensure that young players understand that collegiate tennis offers terrific opportunities to improve their games, play in great venues in a team environment, all while getting an education that will serve them well for the rest of their lives.”

ITA CEO Timothy Russell added, “The ITA is thrilled to be continuing our wonderful working relationship with Oracle; an incredibly innovative company with an astonishing forward-thinking CEO. Both parties are committed to positively shaping the future of college tennis. Oracle’s attention to creating events of high distinction, in which the best players in college want to participate and fans want to watch, either in person or from the comfort of their own home via television and live streaming, is elevating our game.”

The newly-christened Oracle ITA Summer Circuit Powered by UTR will serve as a model for level-based play in the nearly 50 tournaments contested during the Summer Circuit’s six-week duration. The Oracle ITA Summer Circuit Powered by UTR, which began in 1993, provides college tennis players, along with junior players, alumni and young aspiring professionals, the opportunity to compete in organized events during the summer months. For the third consecutive year, the circuit will feature nearly 50 tournaments across 23 different states, during a six-week stretch from late June to the end of July. The circuit will culminate at the ITA National Summer Championships, hosted by TCU from August 10-14, which will feature prize money for the first time.

“The ITA Summer Circuit is yet another great opportunity to influence the quality of American tennis and Oracle is excited to play a part in it,” said Hurd. “The summer circuit is the ideal opportunity for all players, from collegians to juniors, to play competitively year-round.”

Oracle will now have an expanded presence in the dual-match portion of the college tennis schedule by becoming the title sponsor of all four National Team Indoor Championships. Contested during the months of February and March, the Oracle ITA National Team Indoor Championships feature 16 of the nation’s top men’s and women’s teams from Division I, and eight highly-ranked men’s and women’s Division III teams vying for a national indoor title.

“We are excited that Oracle will serve as the title sponsor for the National Team Indoor Championships,” said Russell. “The National Team Indoor Championships feature elite fields and stand as a good season-opening barometer for how the dual-match season will play out.”

Serving as the culmination to the fall season, the Oracle ITA National Fall Championships will take place November 1-5, 2018, at the Surprise Tennis & Racquet Complex in Surprise, Arizona, which recently hosted the 2018 NCAA Division II National Championships and previously hosted the 2016 ITA Small College Championships.

The Oracle ITA National Fall Championships features 128 of the nation’s top collegiate singles players (64 men and 64 women) and 64 doubles teams (32 men’s team and 32 women’s teams). In its second year, having replaced the ITA National Indoor Intercollegiate Championships, it is the lone event on the collegiate tennis calendar to feature competitors from all five divisions playing in the same tournament.

Created in 2015, the Oracle ITA Masters has established itself as one of the premier events of the collegiate tennis season. The Oracle ITA Masters features singles draws of 32 for men and women, and a mixed doubles event with a 32-draw. Players are chosen based upon conference representation, similar to the NCAA Tournament.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
212-508-7935
deborah.hellinger@oracle.com
Dan Johnson
ITA Marketing and Communications
303-579-4878
djohnson@itatennis.com
About the ITA

The Intercollegiate Tennis Association (ITA) is committed to serving college tennis and returning the leaders of tomorrow. As the governing body of college tennis, the ITA oversees women’s and men’s varsity tennis at NCAA Divisions I, II and III, NAIA and Junior/Community College divisions. The ITA administers a comprehensive awards and rankings program for men's and women’s varsity players, coaches and teams in all divisions, providing recognition for their accomplishments on and off the court. For more information on the ITA, visit the ITA website at www.itatennis.com, like the ITA on Facebook or follow @ITA_Tennis on Twitter and Instagram.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • 212-508-7935

Dan Johnson

  • 303-579-4878

Oracle Partner PaaS Summer Camps VIII - August 27 - 31, 2018

The Oracle PaaS Summer Camp is a one-week training for cutting-edge software consultants, engineers and enterprise-level professionals. The #PaaSSummerCamp brings together the world’s leading...

We share our skills to maximize your revenue!
Categories: DBA Blogs

DBA_HIST_SQLSTAT and GV$SQL

Tom Kyte - Wed, 2018-06-20 19:46
Hi, I was trying to create a dashboard comparing historical executions and current executions of multiple SQL statements. I have noticed some differences between stats in GV$SQL and DBA_HIST_SQLSTAT. Could you please help us to understand below po...
Categories: DBA Blogs

Need help in formulating query to fetch previous quote times

Tom Kyte - Wed, 2018-06-20 19:46
Hi AskTom Team, I have been a big fan of this site since 1999 around the time it came up. First of all, again a big Thank you for your support to Oracle Community since past two decades. I have immensely benefited from this. This time arou...
Categories: DBA Blogs

Partitioned table cleanup

Tom Kyte - Wed, 2018-06-20 19:46
Hi I have a table that was created for debugging purposes. Every night a jobs kicks off creating a partition of the days inserts on the table based on date. Needless to say have the partitions grown rapidly and have taken up a lot space in the tab...
Categories: DBA Blogs

Implementing Master/Detail in Oracle Visual Builder Cloud Service

Shay Shmeltzer - Wed, 2018-06-20 18:29

This is a quick demo that combines two techniques I showed in previous blogs - filtering lists, and accessing the value of a selected row in a table. Leveraging these two together, it's quite easy to create a page that has two tables on it - one is the parent and the other is the child; once you select a record in the parent, the child table updates to show only the related child records.

Here is a quick demo:

The two steps we are doing are:

  • Create an action flow on the change of the first-selected-row attribute of the parent table
  • In the flow, use the assign-variable function to set the filterCriterion of the child table to check for the value selected in the master (see the sketch below)
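
A rough sketch of that filterCriterion assignment (the attribute name, variable name, and exact shape are assumptions based on the common ServiceDataProvider filterCriterion structure, not taken verbatim from the demo):

{
  "op": "$and",
  "criteria": [
    {
      "op": "$eq",
      "attribute": "parentId",
      "value": "{{ $variables.selectedParentId }}"
    }
  ]
}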

As you can see - quite simple.

 

Categories: Development
