Rittman Mead Consulting

Rittman Mead consults, trains, and innovates within the world of Oracle Business Intelligence, data integration, and analytics.

Oracle Analytics Cloud (OAC) training with Rittman Mead

Tue, 2019-03-19 11:36

Rittman Mead has today launched its new Oracle Analytics Cloud (OAC) Bootcamp. Run on OAC, the course lasts four days, covers everything you need to know to manage your Cloud BI platform, and assumes no prior knowledge.

As the course is modular, you are able to choose which days you'd like to attend. Day 1 covers an OAC overview, provisioning, systems management, integration and security. Day 2 covers RPD Modelling and Data Modeller. Day 3 is devoted to creating reports, dashboards, alerts and navigation. Day 4 covers content creation using Oracle Data Visualization.

Book here: https://www.rittmanmead.com/training-schedule/

Got a team to train? You can also have our OAC Bootcamp delivered on-site at your location. For more information and prices contact training@rittmanmead.com

Categories: BI & Warehousing

Spark Streaming and Kafka, Part 3 - Analysing Data in Scala and Spark

Tue, 2019-03-12 10:50

In my first two blog posts of the Spark Streaming and Kafka series - Part 1 - Creating a New Kafka Connector and Part 2 - Configuring a Kafka Connector - I showed how to create a new custom Kafka Connector and how to set it up on a Kafka server. Now it is time to deliver on the promise to analyse Kafka data with Spark Streaming.

When working with Apache Spark, you can choose between several programming languages: Scala, Java or Python. (There is also support for querying Spark from R.) Python is admittedly the most popular, largely because it is the most widely used (and easiest to learn) language of the three. Python's PySpark library is catching up with the Spark features available in Scala, but the fact that Python relies on dynamic typing poses challenges for Spark integration and, in my opinion, makes Spark a less natural fit with Python than with Scala.

Spark and Scala - the Basics

Spark was developed in Scala and its look and feel closely resembles that of its mother language. In fact, before diving into Spark Streaming, I am tempted to illustrate that with a small example (which also nicely recaps the basics of Spark usage):

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

object SparkTellDifference extends App {

    // set up Spark Context
    val sparkSession = SparkSession.builder.appName("Simple Application").config("spark.master", "local[*]").getOrCreate()
    val sparkContext = sparkSession.sparkContext
    sparkContext.setLogLevel("ERROR")

    // step 0: establish source data sets
    val stringsToAnalyse: List[String] = List("Can you tell the difference between Scala & Spark?", "You will have to look really closely!")
    val stringsToAnalyseRdd: RDD[String] = sparkContext.parallelize(stringsToAnalyse)

    // step 1: split sentences into words
    val wordsList: List[String]   = stringsToAnalyse    flatMap (_.split(" "))
    val wordsListRdd: RDD[String] = stringsToAnalyseRdd flatMap (_.split(" "))

    // step 2: convert words to lists of chars, create (key,value) pairs.
    val lettersList: List[(Char,Int)]   = wordsList    flatMap (_.toList) map ((_,1))
    val lettersListRdd: RDD[(Char,Int)] = wordsListRdd flatMap (_.toList) map ((_,1))

    // step 3: count letters
    val lettersCount: List[(Char, Int)] = lettersList groupBy(_._1) mapValues(_.size) toList
    val lettersCountRdd: RDD[(Char, Int)] = lettersListRdd reduceByKey(_ + _)

    // step 4: get Top 5 letters in our sentences.
    val lettersCountTop5: List[(Char, Int)] = lettersCount sortBy(- _._2) take(5)
    val lettersCountTop5FromRdd: List[(Char, Int)] = lettersCountRdd sortBy(_._2, ascending = false) take(5) toList

    // the results
    println(s"Top 5 letters by Scala native: ${lettersCountTop5}")
    println(s"Top 5 letters by Spark: ${lettersCountTop5FromRdd}")

    // we are done
    sparkSession.stop()
}

The code starts by setting up a Spark Session and Context. Please note that Spark is being used in local mode - I do not have Spark nodes installed in my working environment. With the Spark Context set up, step 0 is to establish the source data sets. Note that the Spark RDD is based on the Scala native List[String] value, which we parallelize. Once parallelized, it becomes a Spark-native RDD.

Step 1 splits sentences into words - much like in the typical Spark word count examples. Step 2 splits those word strings into Char lists - instead of words, let us count letters and see which letters are used the most in the given sentences. Note that steps 1 and 2 look exactly the same, even though the first works on a native Scala list and the second on a Spark RDD. Step 2 ends with us creating the familiar (key,value) pairs that are typically used in Spark RDDs.

Step 3 shows a difference between the two - Spark's reduceByKey has no native Scala analogue, but we can replicate its behaviour with the groupBy and mapValues functions.

In step 4 we sort the data sets descending and take top 5 results. Note minor differences in the sortBy functions.

As you can see, Spark looks very Scala-like and you may have to look closely and check data types to determine if you are dealing with Scala native or remote Spark data types.

The Spark values follow the typical cycle: several transformations turn one RDD into another, and at the end the take(5) action pulls the results from the Spark RDD into a local, native Scala value.

Introducing Spark Streaming

A good guide on Spark Streaming can be found here.

A quick overview of Spark Streaming with Kafka can be found here, though it alone will unlikely be sufficient to understand the Spark Streaming context - you will need to read the Spark Streaming guide as well.

Working with Spark streams is mostly similar to working with regular RDDs: just as you apply transformations to RDDs to get other immutable RDDs and then apply actions to bring the data locally, the same pattern applies to streams. In fact, the transformation part looks exactly the same - you apply a transformation on a Discretized Stream (DStream) to get another DStream. For example, you can have a val words: DStream[String] that represents a stream of words. You can define another DStream with those same words in upper case as

val wordsUpper: DStream[String] = words map (_.toUpperCase)

Note that both these values represent streams - data sources where new data production might be ongoing. So if you have an incoming stream of words, you can define another data stream of the same words but in upper case. That includes the words not yet produced into the stream.

Source: https://spark.apache.org/docs/latest/streaming-programming-guide.html

(If the value words were an RDD, the wordsUpper calculation would look almost the same: val wordsUpper: RDD[String] = words map (_.toUpperCase).) However, DStreams and RDDs differ when it comes to getting the data locally - for RDDs you call actions; for DStreams it is a bit more complicated. But... let us start from the beginning.

Setting up Spark Streaming

Much like a Spark Session and Context, Spark Streaming needs to be initialised.

We start by defining Spark Config - much like for SparkSession in the simple Spark example, we specify the application name and define the nodes we are going to use - in our case - local nodes on my developer workstation. (The asterisk means that Spark can utilise all my CPU threads.)

val sparkConfig = 
  new SparkConf().setMaster("local[*]").setAppName("SparkKafkaStreamTest")

The next step is creating a Spark StreamingContext. We pass in the config defined above but also specify the Spark Streaming batch interval - 1 minute. This matches the production interval of the Connector we set up in Kafka. But we could also define a 5 minute batch interval and get 5 records in every batch.

val sparkStreamingContext = new StreamingContext(sparkConfig, Minutes(1))

Before we proceed, we would like to disable the annoying INFO messages that Spark likes to flood us with. The Spark log level is set on the Spark Context, but we do not have a SparkContext defined, do we? We only have a StreamingContext. Well, actually, upon the creation of a StreamingContext a SparkContext is created as well, and we can access it via the StreamingContext value:

sparkStreamingContext.sparkContext.setLogLevel("ERROR")

That is the Spark Streaming Context dealt with.

Setting up Access to Kafka

Setting up access to Kafka is equally straightforward. We start by configuring Kafka consumer:

val kafkaConfig = Map[String, Object](
    "client.dns.lookup" -> "resolve_canonical_bootstrap_servers_only",
    "bootstrap.servers" -> "192.168.1.13:9092",
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id" -> "kafkaSparkTestGroup",
    "auto.offset.reset" -> "latest",
    "enable.auto.commit" -> (false: java.lang.Boolean)
)

The parameters given here in a Scala Map are Kafka Consumer configuration parameters as described in Kafka documentation. 192.168.1.13 is the IP of my Kafka Ubuntu VM.

Although I am referring to my Kafka server by IP address, I had to add an entry to the hosts file with my Kafka server name for my connection to work:

192.168.1.13 kafka-box

The client.dns.lookup value did not have an impact on that.

The next step is specifying an array of Kafka topics - in our case that is only one topic - 'JanisTest':

val kafkaTopics = Array("JanisTest")

Getting First Data From Kafka

We are ready to initialise our Kafka stream in Spark. We pass our StreamingContext value, topics list and Kafka Config value to the createDirectStream function. We also specify our LocationStrategy value - as described here. Consumer Strategies are described here.

val kafkaRawStream: InputDStream[ConsumerRecord[String, String]] =
        KafkaUtils.createDirectStream[String, String](
            sparkStreamingContext,
            LocationStrategies.PreferConsistent,
            ConsumerStrategies.Subscribe[String, String](kafkaTopics, kafkaConfig)
        )

What gets returned is a Spark Stream coming from Kafka. Please note that it returns Kafka Consumer record (key,value) pairs. The value part contains our weather data in JSON format. Before we proceed with any sort of data analysis, let us parse the JSON in a similar manner to the JSON parsing we did in Part 1 of this blog post. I will not cover it here, but I have created a Gist that you can have a look at.

The weatherParser function converts the JSON to a WeatherSchema case class instance - the returned value is of type DStream[WeatherSchema], where DStream is the Spark Streaming container:

val weatherStream: DStream[WeatherSchema] = 
   kafkaRawStream map (streamRawRecord => weatherParser(streamRawRecord.value))

Now our data is available for nice and easy analysis.

Let us start with the simplest - check the number of records in our stream:

val recordsCount: DStream[Long] = weatherStream.count()

The above statement deserves special attention. If you have worked with Spark RDDs, you will remember that the RDD count() function returns a Long value instead of an RDD, i.e. it is an action, not a transformation. As you can see above, count() on a DStream returns another DStream, instead of a native Scala long value. It makes sense because a stream is an on-going data producer. What the DStream count() gives us is not the total number of records ever produced by the stream - it is the number of records in the current 1 minute batch. Normally it should be 1 but it can also be empty. Should you take my word for it? Better check it yourself! But how? Certainly not by just printing the recordsCount value - all you will get is a reference to the Spark stream and not the stream content.

Displaying Stream Data

Displaying stream data looks rather odd. To display the recordsCount content, you need the following lines of code:

recordsCount.print()

...

sparkStreamingContext.start() // start the computation
sparkStreamingContext.awaitTermination() // await termination

The DStream value itself has a print() method, which is different from Scala's print() or println() functions. However, for it to actually start printing stream content, you need to start() the stream content computation, which kicks off ongoing stream processing that runs until terminated. The awaitTermination() function waits for the process to be terminated - typically with a Ctrl+C. There are other methods of termination as well, not covered here. So what you will get is the recordsCount stream content printed every batch interval (1 minute in our example) until the program execution is terminated.
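
To make the moving parts easier to see, here is a minimal sketch that assembles the snippets covered so far into one runnable outline - trimmed to the essentials, counting the raw Kafka records (the JSON parsing from the Gist is left out) and reusing the Kafka settings and topic name assumed in this post:

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Minutes, StreamingContext}
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010._

object KafkaStreamCountSketch extends App {

    // Spark Streaming context with a 1 minute batch interval
    val sparkConfig = new SparkConf().setMaster("local[*]").setAppName("SparkKafkaStreamTest")
    val sparkStreamingContext = new StreamingContext(sparkConfig, Minutes(1))
    sparkStreamingContext.sparkContext.setLogLevel("ERROR")

    // Kafka consumer configuration - broker address and topic as assumed earlier in this post
    val kafkaConfig = Map[String, Object](
        "bootstrap.servers" -> "192.168.1.13:9092",
        "key.deserializer" -> classOf[StringDeserializer],
        "value.deserializer" -> classOf[StringDeserializer],
        "group.id" -> "kafkaSparkTestGroup",
        "auto.offset.reset" -> "latest",
        "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val kafkaRawStream: InputDStream[ConsumerRecord[String, String]] =
        KafkaUtils.createDirectStream[String, String](
            sparkStreamingContext,
            LocationStrategies.PreferConsistent,
            ConsumerStrategies.Subscribe[String, String](Array("JanisTest"), kafkaConfig)
        )

    // count the records in each 1 minute batch and print the count
    kafkaRawStream.count().print()

    sparkStreamingContext.start()            // start the computation
    sparkStreamingContext.awaitTermination() // await termination
}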

The output will look something like this, with a new record appearing every minute:

-------------------------------------------
Time: 1552067040000 ms
-------------------------------------------
1

-------------------------------------------
Time: 1552067100000 ms
-------------------------------------------
0

-------------------------------------------
Time: 1552067160000 ms
-------------------------------------------
1

Notice the '...' between the recordsCount.print() and the stream start(). You can have DStream transformations following the recordsCount.print() statement and other DStream print() calls before the stream is started. Then, instead of just the count, you will get other values printed for each 1 minute batch.

You can do more than just print the DStream content on the console, but we will come to that a bit later.

Analysing Stream Data

Above we have covered all the basics - we have initialised the Spark Context and Kafka access, we have retrieved stream data and we know how to set up ongoing printing of the results for our stream batches. Before we proceed with our exploration, let us define a goal for our data analysis.

We are receiving a real-time stream of weather data. What we could analyse is the temperature change dynamics within the past 60 minutes. Note that we are receiving a new batch every minute so every minute our 60 minute window will move one step forward.

Source: https://spark.apache.org/docs/latest/streaming-programming-guide.html

What we have got is our weatherStream DStream value. First let us define a Stream window of 60 minutes (check Spark documentation for explanation on how Stream Windows work.)

val weatherStream1Hour: DStream[WeatherSchema] = weatherStream.window(Minutes(60))

The WeatherSchema case class contains many values. But all we need for our simple analysis is really just the timestamp and temperature. Let us extract just the data we need and put it in a traditional RDD (key,value) pair. And we print the result to verify it.

val weatherTemps1Hour: DStream[(String, Double)] = 
   weatherStream1Hour map (weatherRecord => 
     (weatherRecord.dateTime, weatherRecord.mainTemperature) 
   )

weatherTemps1Hour.print()

Please note that the above code should come before the sparkStreamingContext.start() call.

The result we are getting looks something like this:

-------------------------------------------
Time: 1552068480000 ms
-------------------------------------------
(08/03/2019 16:57:27,8.42)
(08/03/2019 16:57:27,8.42)
(08/03/2019 17:06:02,8.38)
(08/03/2019 17:06:02,8.38)
(08/03/2019 17:06:02,8.38)
(08/03/2019 17:06:02,8.38)
(08/03/2019 17:06:02,8.38)
(08/03/2019 17:06:02,8.38)
(08/03/2019 17:06:02,8.38)
(08/03/2019 17:06:02,8.38)
...

Notice the ellipsis at the end. Not all records get displayed if there are more than 10.

Of course, we will get a new result printed every minute. However, the latest results will be at the bottom, which means they will be hidden if there are more than 10 of them. Also note that the weather data we are getting is actually not refreshed every minute but more like every 10 minutes. Our 1 minute batch frequency does not represent the actual frequency of weather data updates. But let us deal with those problems one at a time.

For me, vanity always comes first. Let me convert the (key,value) pair output to a nice looking narrative.

val weatherTemps1HourNarrative = weatherTemps1Hour map { 
  case(dateTime, temperature) => 
    s"Weather temperature at ${dateTime} was ${temperature}" 
}

weatherTemps1HourNarrative.print()

The result:

-------------------------------------------
Time: 1552068480000 ms
-------------------------------------------
Weather temperature at 08/03/2019 16:57:27 was 8.42
Weather temperature at 08/03/2019 16:57:27 was 8.42
Weather temperature at 08/03/2019 17:06:02 was 8.38
Weather temperature at 08/03/2019 17:06:02 was 8.38
Weather temperature at 08/03/2019 17:06:02 was 8.38
Weather temperature at 08/03/2019 17:06:02 was 8.38
Weather temperature at 08/03/2019 17:06:02 was 8.38
Weather temperature at 08/03/2019 17:06:02 was 8.38
Weather temperature at 08/03/2019 17:06:02 was 8.38
Weather temperature at 08/03/2019 17:06:02 was 8.38
...

We are still limited to the maximum of 10 records that the DStream print() function gives us. Also, unless we are debugging, we are almost certainly going to go further than just printing the records to the console. For that we use the DStream foreachRDD function, which works similarly to the map function but does not return any data. Instead, whatever we do with the stream data - print it to the console, save it into a CSV file or a database - needs to take place within the foreachRDD function.

The foreachRDD Function

The foreachRDD function accepts a function as a parameter, which receives as its input a current RDD value representing the current content of the DStream and deals with that content in the function's body.

Ok, at long last we are getting some results back from our Spark stream that we can use, that we can analyse, that we know how to deal with! Let us get carried away!

weatherTemps1Hour foreachRDD { currentRdd =>
  println(s"RDD content:\n\t${currentRdd.collect().map{case(dateTime,temperature) => s"Weather temperature at ${dateTime} was ${temperature}"}.mkString("\n\t")}")

  val tempRdd: RDD[Double] = currentRdd.map(_._2)

  val minTemp = if(tempRdd.isEmpty()) None else Some(tempRdd.min())
  val maxTemp = if(tempRdd.isEmpty()) None else Some(tempRdd.max())

  println(s"Min temperature: ${if(minTemp.isDefined) minTemp.get.toString else "not defined"}")

  println(s"Max temperature: ${if(maxTemp.isDefined) maxTemp.get.toString else "not defined"}")

  println(s"Temperature difference: ${if(minTemp.isDefined && maxTemp.isDefined) (maxTemp.get - minTemp.get).toString}\n")
}

Here we format the output and get the min and max temperatures within the 60 minute window, as well as their difference. Let us look at the result:

RDD content:
	Weather temperature at 08/03/2019 16:57:27 was 8.42
	Weather temperature at 08/03/2019 16:57:27 was 8.42
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:06:02 was 8.38
	Weather temperature at 08/03/2019 17:17:32 was 8.28
	Weather temperature at 08/03/2019 17:17:32 was 8.28
	Weather temperature at 08/03/2019 17:17:32 was 8.28
	Weather temperature at 08/03/2019 17:17:32 was 8.28
	Weather temperature at 08/03/2019 17:17:32 was 8.28
	Weather temperature at 08/03/2019 17:17:32 was 8.28
	Weather temperature at 08/03/2019 17:17:32 was 8.28
	Weather temperature at 08/03/2019 17:17:32 was 8.28
	Weather temperature at 08/03/2019 17:17:32 was 8.28
	Weather temperature at 08/03/2019 17:17:32 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:30:52 was 8.28
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:37:51 was 8.19
	Weather temperature at 08/03/2019 17:51:10 was 8.1
	Weather temperature at 08/03/2019 17:51:10 was 8.1
	Weather temperature at 08/03/2019 17:51:10 was 8.1
	Weather temperature at 08/03/2019 17:51:10 was 8.1
	Weather temperature at 08/03/2019 17:51:10 was 8.1
	Weather temperature at 08/03/2019 17:51:10 was 8.1
	Weather temperature at 08/03/2019 17:51:10 was 8.1
	Weather temperature at 08/03/2019 17:51:10 was 8.1
	Weather temperature at 08/03/2019 17:51:10 was 8.1
	Weather temperature at 08/03/2019 17:51:10 was 8.1
	Weather temperature at 08/03/2019 18:01:10 was 7.99
	Weather temperature at 08/03/2019 18:01:10 was 7.99
	Weather temperature at 08/03/2019 18:01:10 was 7.99
Min temperature: 7.99
Max temperature: 8.42
Temperature difference: 0.4299999999999997

(Note that there is no Time: 1552068480000 ms header in our results printout because we are no longer using the DStream print() function.)

However, I would like my analysis to be more detailed. It is time to involve Spark DataFrames.

Kafka Stream Data Analysis with Spark DataFrames

Just like in the previous example, I work with the stream content inside the foreachRDD function - the currentRdd value it passes to us represents the current content of the DStream. In fact, all the code that follows will be within the foreachRDD function block:

weatherStream1Hour foreachRDD { currentRdd =>
  ... // the following code comes here
}

First, let us create a DataFrame from an RDD. Spark RDDs and DataFrames are two quite different representations of distributed data. And yet - look how simply the conversion works:

val spark =
   SparkSession.builder.config(currentRdd.sparkContext.getConf).getOrCreate()
import spark.implicits._

val simpleDF: DataFrame = currentRdd.toDF()
simpleDF.createOrReplaceTempView("simpleDF")

This trick works because our weatherStream1Hour DStream, and consequently the currentRdd value that represents the stream content, are based on the WeatherSchema case class (the data types are weatherStream1Hour: DStream[WeatherSchema] and currentRdd: RDD[WeatherSchema]). Therefore the currentRdd.toDF() implicit conversion works - Spark understands Scala case classes.

Once we have the DataFrame created, we can create a Temp view so we can query this DF with Spark SQL - that is what the createOrReplaceTempView function is for.

Let us start with the simplest queries - query the count(*) and the full content of the DataFrame:

val countDF: DataFrame = spark.sql("select count(*) as total from simpleDF")
countDF.show()

val selectAllDF = spark.sql("select * from simpleDF")
selectAllDF.show()

The result:

The DataFrame's show() function by default only shows 20 rows, but we can always adjust that to show more or fewer. However, as we established earlier in our analysis, the weather data does not actually get updated every minute - we are getting lots of duplicate records that we could get rid of. It is easy with SQL's distinct:

val distinctDF = spark.sql("select distinct * from simpleDF")
distinctDF.show()

The result - only 7 distinct weather measurements, confirming our suspicion that we are only getting a weather update approximately every 10 minutes:

Let us go back to our analysis - temperature change dynamics within the past 60 minutes. The temperature value in our DataFrame is named 'mainTemperature'. But where is the timestamp? We did have the dateTime value in our RDD. Why is it missing from the DataFrame? The answer: because dateTime is actually a function. In the RDD, when we referenced it, we did not care whether it was a value or a function call. Now, when dealing with DataFrames, it becomes relevant.

As can be seen in the Gist, dateTime is a function - in fact it is a WeatherSchema case class method - and is calculated from the dt value, which represents time in Unix format. The function that performs the actual conversion - dateTimeFromUnix - is defined in the WeatherParser object in the same Gist. If we want to get the same dateTime value in our DataFrame, we will have to register a Spark User Defined Function (UDF).
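
The full definitions live in the Gist; a simplified sketch of the relevant parts might look like the following (most fields are omitted and the exact date format is an assumption based on the output shown above):

import java.time.{Instant, ZoneId}
import java.time.format.DateTimeFormatter

object WeatherParser {
    // convert Unix epoch seconds (the 'dt' field in the JSON) to a display string
    private val formatter = DateTimeFormatter.ofPattern("dd/MM/yyyy HH:mm:ss").withZone(ZoneId.systemDefault)
    def dateTimeFromUnix(epochSeconds: Double): String = formatter.format(Instant.ofEpochSecond(epochSeconds.toLong))
}

// simplified - the real case class has many more fields parsed from the JSON
case class WeatherSchema(dt: Double, mainTemperature: Double) {
    def dateTime: String = WeatherParser.dateTimeFromUnix(dt)
}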

Creating a Spark User Defined Function (UDF)

Fortunately, creating UDFs is no rocket science - we do that with the Spark udf function. However, to use this function in a Spark SQL query, we need to register it first - associate a String function name with the function itself.

val dateTimeFromSeconds: Double => String = WeatherParser.dateTimeFromUnix(_)
val dateTimeFromSecondsUdf = udf(dateTimeFromSeconds)
spark.udf.register("dateTimeFromSecondsUdf", dateTimeFromSecondsUdf) // to register for SQL use

Now let us query the temperature and time:

val tempTimeDF = spark.sql(
   """select distinct
        dt timeKey,
        dateTimeFromSecondsUdf(dt) temperatureTakenTime,
        mainTemperature temperatureCelsius
      from simpleDF order by timeKey"""
)

tempTimeDF.show()
tempTimeDF.createOrReplaceTempView("tempTime")

We show the results but also register the resulting DataFrame as a Temp View so we can from now on reference it in Spark SQL.

Note that we are converting the dt value to a String timestamp value but also keeping the original dt value - because dt is a number that can be sorted chronologically whereas the String timestamp cannot.

The result looks like this:

More Kafka Stream Data Analysis with Spark DataFrames

Now we have the times and the temperatures. But we want to see how temperatures changed between measurements. For example, between the two consecutive measurements at 17:06 and 17:17 the temperature (in London) dropped from 8.38 to 8.28 degrees Celsius. We want to have that value of minus 0.1 degrees in our result set.

If we were using Oracle database, the obvious choice would be the LAG analytic function. Do we have an analogue for that in Spark? Yes, we do! However, this time, instead of using Spark SQL, we will use the withColumn DataFrame function to define the LAG value:

// lag and monotonically_increasing_id come from Spark SQL's functions package
import org.apache.spark.sql.functions.{lag, monotonically_increasing_id}

val lagWindow = org.apache.spark.sql.expressions.Window.orderBy("timeKey")
val lagDF = tempTimeDF
  .withColumn("temperatureCelsiusPrev", lag("temperatureCelsius", 1, 0).over(lagWindow))
  .withColumn("rownum", monotonically_increasing_id())

lagDF.show()
lagDF.createOrReplaceTempView("tempTimeWithPrev")

The result:

Here we are actually adding two values - lag and rownum, the latter being an analogue to the Oracle SQL ROW_NUMBER analytic function.

Note the inputs to the Spark lag function: the first is the source column name, the second is the lag offset and the third is the default value - 0. The default value in our case means zero degrees Celsius, which would mess up our temperature delta for the first temperature measurement. Fortunately, Spark SQL also supports the CASE expression, so we can deal with this challenge with ease. In addition, let us round the result to get rid of the floating point artefacts.

val tempDifferenceDF = spark.sql(
	"""select
		temperatureTakenTime,
		temperatureCelsius,
		temperatureCelsius - temperatureCelsiusPrev temperatureDiff,
		ROUND(CASE
			WHEN (rownum = 0)
			THEN 0
			ELSE temperatureCelsius - temperatureCelsiusPrev
		END, 2) AS temperatureDiffAdjusted
	from tempTimeWithPrev""")

tempDifferenceDF.show()

And the result:

Conclusion

Kafka stream data analysis with Spark Streaming works and is easy to set up and get running. In this 3-part blog series, by far the most challenging part was creating a custom Kafka connector. Once the Connector was created, setting it up and then getting the data source working in Spark was smooth sailing.

One thing to keep in mind when working with streams is that they are different from RDDs: RDDs are static, immutable data sources, whereas DStreams are by their nature dynamic and continuously changing.

The challenging bit in the code is the

sparkStreamingContext.start() // start the computation
sparkStreamingContext.awaitTermination() // await termination

code block and its interaction with the foreachRDD function - to somebody not familiar with how Spark Streaming works, the code can be hard to understand.

The ease of creating a DataFrame from the original RDD was a pleasant surprise.

So, is using Spark and Kafka with Scala a good idea? Definitely yes. It works like a charm. However, in real life, additional considerations like the availability and cost of Python vs Scala developers as well as your existing code base will come into play. I hate real life.

Categories: BI & Warehousing

Rittman Mead at Analytics and Data Summit 2019

Tue, 2019-03-05 09:49

The Analytics and Data Summit 2019 (formerly known as BIWA) is happening next week at Oracle HQ in Redwood Shores. I'm excited to participate since it's one of the best conferences for the Oracle Analytics crowd: you get three days full of content from experts as well as hints on products' future developments directly from the related Product Managers!

I'll be representing Rittman Mead in two sessions:

This two-hour workshop will cover all the details of OAC: Product Overview, Instance Creation and Management, Moving from on-prem OBIEE to OAC, Data Preparation and Data Visualization, and Advanced Analytics - with interactive labs where participants can experience Data Visualization and Data Flows.

Become a Data Scientist with OAC! This session will explain how Oracle Analytics Cloud acts as an enabler for the transformation from Data Analyst to Data Scientist. Connection to the data, cleaning, transformation and analysis will be the intermediate steps before training several machine learning models, which will then be evaluated and used to predict outcomes on unseen data - with a demo showing all the steps in a real example based on a wine dataset!

There is a full list of all sessions here. You can follow the conference on Twitter with the hashtag #AnDSummit2019, and I'll be tweeting about it too as @ftisiot.

The presentations that I'm delivering will be available to download on speakerdeck.

Categories: BI & Warehousing

Spark Streaming and Kafka - Configuring a Kafka Connector

Wed, 2019-02-27 05:14

In my previous blog post, I covered the development of a custom Kafka Source Connector, written in Scala. This blog post is about deploying that Connector. We are getting closer to analysing the stream data in Spark - as promised in the title.

If you are installing a Connector from Confluent Hub, instead of installing a custom one, please follow this guide instead.

Setting up Kafka Server

This blog post does not cover setting up a Kafka instance. However, let me give a quick overview of the environment I am using:

  • OS: Ubuntu server 18.10.
  • Kafka version - at the time of writing, the latest version is 2.1.0. So that is the one I am using. Please note that the Kafka Connector API interface is slightly different for different Kafka versions. (In fact, early in my Connector development I saw Kafka throwing strange missing function errors. It turned out I was using a slightly older API for Kafka 2.0.0 instead of the one for 2.1.0. - the SourceConnector class's function `taskConfigs` had been renamed in the latest version.)
  • Java: OpenJDK 11.0.1. It seems the OpenJDK is preferred for Kafka over the Oracle Java JDK.

I got the Kafka install file from:
http://www-eu.apache.org/dist/kafka/2.1.0/kafka_2.12-2.1.0.tgz

Setting up Kafka Topic

Confluent have an excellent user guide on how to set up Connectors. There is also a Quick Start guide in the Apache Kafka website, though it is much less detailed than the guide from Confluent.

The first step for me, once my Kafka instance is up and running, is to create a new topic for my Weather records. Let me call it 'JanisTest'. From Kafka root folder I execute:

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic JanisTest
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
JanisTest
__consumer_offsets
$

After creating it, I run the --list command to see that it is indeed created and available.

Install Connector

Copy jars from your Scala development environment to your Kafka instance. I chose to have a folder of jars, not one uber-jar with all library dependencies packaged in. But either approach should work fine.

$ cd ~/kafka
$ mkdir connectors
$ cd connectors
$ cp /VMSharedFolder/connectorJars/* .

Do not add the above folder to CLASSPATH! Instead, add it to the connect-*.properties config files. In the later versions of Kafka, the server itself manages paths to plugins.

$ cd ~/kafka/config
$ vi connect-standalone.properties

We will run our connector in standalone mode, therefore the important file for us to edit is 'connect-standalone.properties'. But we can also edit the 'connect-distributed.properties' config file if we ever decide to run Kafka Connect on multiple nodes.

Add the following line to the connect-standalone.properties and connect-distributed.properties configuration files:

plugin.path=/home/kafka/kafka/connectors

If you need to add several connector folders, the plugin.path value is comma-separated.
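
For example, with a second, hypothetical connector folder added:

plugin.path=/home/kafka/kafka/connectors,/home/kafka/kafka/other-connectors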

Configure Connector

Similar to the connect-standalone.properties file, our newly installed Source Connector also requires a configuration file. Let us create one:

$ cd ~/kafka/config
$ vi connect-http-source.properties

We specify the following configuration attributes in the config file - these attributes were defined in the `HttpSourceConnectorConstants` object we discussed in my previous blog post.

name=http-source

http.url=http://api.openweathermap.org/data/2.5/weather
http.api.key=<YOUR API KEY>
http.api.params=units=metric&q=London,uk

service.name=CurrentWeatherData
topic=JanisTest

poll.interval.ms=60000
tasks.max=1
connector.class=com.rittmanmead.kafka.connector.http.HttpSourceConnector

  • name - name of the Connector.
  • http.url - the base URL for the HTTP API.
  • http.api.key - replace <YOUR API KEY> with the key you got when registering with OpenWeatherMap.
  • http.api.params - any additional parameters you would like to specify for your API request - as described in the API documentation. Follow the format name=value. In case you need to specify several attributes, separate them with a &.
  • service.name - you can name your service.
  • topic - give the topic name that was created previously, as described in this guide.
  • poll.interval.ms - the frequency at which the API will be queried. 60000ms = 1 minute. Because the weather does not change that quickly, this polling interval is frequent enough for us. The free API tier limits us to no more than 1 query per second.
  • tasks.max - the maximum number of concurrent tasks allowed. In our case it is 1.
  • connector.class - the full class path of the SourceConnector class we have implemented, which is described in my previous blog post.

Run Connector

Because we are only running the Connector in standalone mode, there is not much config to be done. We are ready to run it. To start our Source Connector, we run the connect-standalone.sh command:

$ cd ~/kafka
$ ./bin/connect-standalone.sh ./config/connect-standalone.properties ./config/connect-http-source.properties

The first argument passed to the connect-standalone.sh script is the worker properties configuration - there is only one of those. It is followed by one or many (in our case, one) connector configurations.

Take a deep breath and hit Enter.

A long log output will follow. If you get any errors, the command will return to the shell prompt. One possible error is missing classes - if that is the case, please check that you have successfully deployed all the jars and that their folder is registered in the connect-standalone.properties file. Also, if any of the required configuration parameters are missing in the connect-http-source.properties file, you will get an error message.

If successful, the log listing will end with something like this:

[2019-02-19 14:14:55,294] INFO Kafka version : 2.1.0 (org.apache.kafka.common.utils.AppInfoParser:109)
[2019-02-19 14:14:55,297] INFO Kafka commitId : 809be928f1ae004e (org.apache.kafka.common.utils.AppInfoParser:110)
[2019-02-19 14:14:55,334] INFO Created connector http-source (org.apache.kafka.connect.cli.ConnectStandalone:104)
[2019-02-19 14:14:55,406] INFO Setting up an HTTP service for http://api.openweathermap.org/data/2.5/weather... (com.rittmanmead.kafka.connector.http.HttpSourceTask:40)
[2019-02-19 14:14:55,466] INFO Starting to fetch from http://api.openweathermap.org/data/2.5/weather each 60000ms... (com.rittmanmead.kafka.connector.http.HttpSourceTask:47)
[2019-02-19 14:14:55,469] INFO WorkerSourceTask{id=http-source-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:199)
[2019-02-19 14:14:56,560] INFO Http return code: 200 (com.rittmanmead.kafka.connector.http.HttpSourceTask:31)
[2019-02-19 14:14:56,569] INFO Weather Schema parser: JSON text to be parsed: {"coord":{"lon":-0.13,"lat":51.51},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"base":"stations","main":{"temp":10.09,"pressure":1019,"humidity":61,"temp_min":9,"temp_max":11},"visibility":10000,"wind":{"speed":4.6,"deg":250},"clouds":{"all":44},"dt":1550582400,"sys":{"type":1,"id":1414,"message":0.0038,"country":"GB","sunrise":1550559964,"sunset":1550597002},"id":2643743,"name":"London","cod":200} (com.rittmanmead.kafka.connector.http.HttpSourceTask:273)
[2019-02-19 14:14:57,481] INFO JSON parsed class content: WeatherSchema(Coord(-0.13,51.51),List(WeatherAtom(802.0,Clouds,scattered clouds,03d)),stations,Main(10.09,1019.0,61.0,9.0,11.0),10000.0,Wind(4.6,250.0),Clouds(44.0),1.5505824E9,Sys(1.0,1414.0,0.0038,GB,1.550559964E9,1.550597002E9),2643743.0,London,200.0) (com.rittmanmead.kafka.connector.http.HttpSourceTask:283)
[2019-02-19 14:14:57,499] INFO Got 1 results for CurrentWeatherData (com.rittmanmead.kafka.connector.http.HttpSourceTask:75)

Some of the above messages were generated by our Connector directly, like the last line above - 'Got 1 results for CurrentWeatherData'. Others are generated by Kafka itself.

Check Connector Output

Our ultimate goal is to analyse a Kafka Stream with Spark in Scala. However, that will have to wait till my next blog post. For now we can check if the topic is being populated. Please note that our polling interval is 1 minute - the topic will not be flooded with records right away.

$ cd ~/kafka
$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic JanisTest --from-beginning

If all is going well, you should see Weather records, generated according to our custom-defined schema:

Quite easy, was it not? Certainly setting up a Kafka connector is easier than developing one.

The next step will be analysing this data from Spark. But that will be my next blog post.

Categories: BI & Warehousing

Spark Streaming and Kafka - Creating a New Kafka Connector

Thu, 2019-02-21 08:39
More Kafka and Spark, please!

Hello, world!

Having joined Rittman Mead more than 6 years ago, the time has come for my first blog post. Let me start by standing on the shoulders of blogging giants, revisiting Robin's old blog post Getting Started with Spark Streaming, Python, and Kafka.

The blog post was very popular, touching on the subjects of Big Data and Data Streaming. To put my own twist on it, I decided to:

  • not use Twitter as my data source, because there surely must be other interesting data sources out there,
  • use Scala, my favourite programming language, to see how different the experience is from using Python.

Why Scala?

Scala is admittedly more challenging to master than Python. However, because Scala compiles into Java bytecode, it can be used pretty much anywhere Java is used. And Java is used everywhere. Python is arguably even more widely used than Java; however, it remains a dynamically typed scripting language that is easy to write but can be hard to debug.

Is there a case for using Scala instead of Python for the job? Both Spark and Kafka were written in Scala (and Java), hence they should get on like a house on fire, I thought. Well, we are about to find out.

My data source: OpenWeatherMap

When it comes to finding sample data sources for data analysis, the selection out there is amazing. At the time of this writing, Kaggle offers 14,470 freely available datasets, many of them in easy-to-digest formats like CSV and JSON. However, when it comes to real-time sample data streams, the selection is quite limited. Twitter is usually the go-to choice - easily accessible and well documented. Too bad I decided not to use Twitter as my source.

Another alternative is the Wikipedia Recent changes stream. Although in the stream schema there are a few values that would be interesting to analyse, overall this stream is more boring than it sounds - the text changes themselves are not included.

Fortunately, I came across the OpenWeatherMap real-time weather data website. They have a free API tier, limited to 1 request per second, which is quite enough for tracking changes in weather. Their different API schemas return plenty of numeric and textual data, all interesting for analysis. The APIs work in a very standard way - first you apply for an API key. With the key you can query the API with a simple HTTP GET request (apply for your own API key instead of using the sample one - it is easy):

This request

https://samples.openweathermap.org/data/2.5/weather?q=London,uk&appid=b6907d289e10d714a6e88b30761fae22

gives the following result:

{
  "coord": {"lon":-0.13,"lat":51.51},
  "weather":[
    {"id":300,"main":"Drizzle","description":"light intensity drizzle","icon":"09d"}
  ],
  "base":"stations",
  "main": {"temp":280.32,"pressure":1012,"humidity":81,"temp_min":279.15,"temp_max":281.15},
  "visibility":10000,
  "wind": {"speed":4.1,"deg":80},
  "clouds": {"all":90},
  "dt":1485789600,
  "sys": {"type":1,"id":5091,"message":0.0103,"country":"GB","sunrise":1485762037,"sunset":1485794875},
  "id":2643743,
  "name":"London",
  "cod":200
}

Getting data into Kafka - considering the options

There are several options for getting your data into a Kafka topic. If the data will be produced by your application, you should use the Kafka Producer Java API. You can also develop Kafka Producers in .Net (usually C#), C, C++, Python, Go. The Java API can be used by any programming language that compiles to Java bytecode, including Scala. Moreover, there are Scala wrappers for the Java API: skafka by Evolution Gaming and Scala Kafka Client by cakesolutions.
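
As an aside, producing to a Kafka topic from Scala via the Java Producer API takes only a few lines. A minimal sketch - the broker address and topic name below are placeholders, not values from this project:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object ProducerSketch extends App {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // placeholder broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // send a single record to a hypothetical topic
    producer.send(new ProducerRecord[String, String]("weather-topic", "London", """{"temp": 10.1}"""))
    producer.close()
}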

OpenWeatherMap is not my application and what I need is integration between its API and Kafka. I could cheat and implement a program that would consume OpenWeatherMap's records and produce records for Kafka. The right way of doing that, however, is by using Kafka Source connectors, for which there is an API: the Connect API. Unlike the Producers, which can be written in many programming languages, for the Connectors I could only find a Java API. I could not find any nice Scala wrappers for it. On the upside, Confluent's Connector Developer Guide is excellent, rich in detail though not quite a step-by-step cookbook.

However, before we decide to develop our own Kafka connector, we must check for existing connectors. The first place to go is Confluent Hub. There are quite a few connectors there, complete with installation instructions, ranging from connectors for particular environments like Salesforce, SAP, IRC, Twitter to ones integrating with databases like MS SQL, Cassandra. There is also a connector for HDFS and a generic JDBC connector. Is there one for HTTP integration? Looks like we are in luck: there is one! However, this connector turns out to be a Sink connector.

Ah, yes, I should have mentioned - there are two flavours of Kafka Connectors: the Kafka-inbound are called Source Connectors and the Kafka-outbound are Sink Connectors. And the HTTP connector in Confluent Hub is Sink only.

Googling for Kafka HTTP Source Connectors gives few interesting results. The best I could find was Pegerto's Kafka Connect HTTP Source Connector. Contrary to what the repository name suggests, the implementation is quite domain-specific, for extracting Stock prices from particular web sites and has very little error handling. Searching Scaladex for 'Kafka connector' does yield quite a few results but nothing for http. However, there I found Agoda's nice and simple Source JDBC connector (though for a very old version of Kafka), written in Scala. (Do not use this connector for JDBC sources, instead use the one by Confluent.) I can use this as an example to implement my own.

Creating a custom Kafka Source Connector

The best place to start when implementing your own Source Connector is the Confluent Connector Development Guide. The guide uses JDBC as an example. Our source is an HTTP API, so early on we must establish whether our data source is partitioned, whether we need to manage offsets for it, and what the schema is going to look like.

Partitions

Is our data source partitioned? A partition is a division of source records that usually depends on the source medium. For example, if we are reading our data from CSV files, we can consider the different CSV files to be a natural partition of our source data. Another example of partitioning could be database tables. But in both cases the best partitioning approach depends on the data being gathered and its usage. In our case, there is only one API URL and we are only ever requesting current data. If we were to query weather data for different cities, that would be a very good partitioning - by city. Partitioning would allow us to parallelise the Connector data gathering - each partition would be processed by a separate task. To make my life easier, I am going to have only one partition.

Offsets

Offsets are for keeping track of the records already read and the records yet to be read. An example of that is reading the data from a file that is continuously being appended - there can be rows already inserted into a Kafka topic and we do not want to process them again to avoid duplication. Why would that be a problem? Surely, when going through a source file row by row, we know which row we are looking at. Anything above the current row is processed, anything below - new records. Unfortunately, most of the time it is not as simple as that: first of all Kafka supports concurrency, meaning there can be more than one Task busy processing Source records. Another consideration is resilience - if a Kafka Task process fails, another process will be started up to continue the job. This can be an important consideration when developing a Kafka Source Connector.

Is it relevant for our HTTP API connector? We are only ever requesting current weather data. If our process fails, we may miss some time periods, but we cannot recover them later on. Offset management is not required for our simple connector.

So that is Partitions and Offsets dealt with. Can we make our lives just a bit more difficult? Fortunately, we can. We can create a custom Schema and then parse the source data to populate a Schema-based Structure. But we will come to that later. First let us establish the Framework for our Source Connector.

Source Connector - the Framework

The starting point for our Source Connector are two Java API classes: SourceConnector and SourceTask. We will put them into separate .scala source files but they are shown here together:

import org.apache.kafka.connect.source.{SourceConnector, SourceTask}

class HttpSourceConnector extends SourceConnector {...}
class HttpSourceTask extends SourceTask {...}

These two classes will be the basis for our Source Connector implementation:

  • HttpSourceConnector represents the Connector process management. Each Connector process will have only one SourceConnector instance.
  • HttpSourceTask represents the Kafka task doing the actual data integration work. There can be one or many Tasks active for an active SourceConnector instance.

We will have some additional classes for config and for HTTP access.
But first let us look at each of the two classes in more detail.

SourceConnector class

SourceConnector is an abstract class that defines an interface that our HttpSourceConnector needs to adhere to. The first function we need to override is config:

  private val configDef: ConfigDef =
      new ConfigDef()
          .define(HttpSourceConnectorConstants.HTTP_URL_CONFIG, Type.STRING, Importance.HIGH, "Web API Access URL")
          .define(HttpSourceConnectorConstants.API_KEY_CONFIG, Type.STRING, Importance.HIGH, "Web API Access Key")
          .define(HttpSourceConnectorConstants.API_PARAMS_CONFIG, Type.STRING, Importance.HIGH, "Web API additional config parameters")
          .define(HttpSourceConnectorConstants.SERVICE_CONFIG, Type.STRING, Importance.HIGH, "Kafka Service name")
          .define(HttpSourceConnectorConstants.TOPIC_CONFIG, Type.STRING, Importance.HIGH, "Kafka Topic name")
          .define(HttpSourceConnectorConstants.POLL_INTERVAL_MS_CONFIG, Type.STRING, Importance.HIGH, "Polling interval in milliseconds")
          .define(HttpSourceConnectorConstants.TASKS_MAX_CONFIG, Type.INT, Importance.HIGH, "Kafka Connector Max Tasks")
          .define(HttpSourceConnectorConstants.CONNECTOR_CLASS, Type.STRING, Importance.HIGH, "Kafka Connector Class Name (full class path)")

  override def config: ConfigDef = configDef

This validates all the required configuration parameters. We also provide a description for each configuration parameter, which will be shown in the missing-configuration error message.

HttpSourceConnectorConstants is an object where config parameter names are defined - these configuration parameters must be provided in the connector configuration file:

object HttpSourceConnectorConstants {
  val HTTP_URL_CONFIG               = "http.url"
  val API_KEY_CONFIG                = "http.api.key"
  val API_PARAMS_CONFIG             = "http.api.params"
  val SERVICE_CONFIG                = "service.name"
  val TOPIC_CONFIG                  = "topic"
  val TASKS_MAX_CONFIG              = "tasks.max"
  val CONNECTOR_CLASS               = "connector.class"

  val POLL_INTERVAL_MS_CONFIG       = "poll.interval.ms"
  val POLL_INTERVAL_MS_DEFAULT      = "5000"
}

Another simple function to be overridden is taskClass - for the SourceConnector class to know its corresponding SourceTask class.

  override def taskClass(): Class[_ <: SourceTask] = classOf[HttpSourceTask]

The last two functions to be overridden here are start and stop. These are called upon the creation and termination of a SourceConnector instance (not a Task instance). JavaMap here is an alias for java.util.Map - a Java Map, not to be confused with the native Scala Map, which cannot be used here. (If you are a Python developer, a Map in Java/Scala is similar to the Python dictionary, but strongly typed.) The interface requires Java data structures, but that is fine - we can convert between the two. By far the biggest problem here is the assignment of the connectorConfig variable - we cannot have a functional-programming-friendly immutable value here. The variable is defined at the class level

  private var connectorConfig: HttpSourceConnectorConfig = _

and is set in the start function and then referred to in the taskConfigs function further down. This does not look pretty in Scala. Hopefully somebody will write a Scala wrapper for this interface.

Because there is no logout/shutdown/sign-out required for the HTTP API, the stop function just writes a log message.

  override def start(connectorProperties: JavaMap[String, String]): Unit = {
    Try (new HttpSourceConnectorConfig(connectorProperties.asScala.toMap)) match {
      case Success(cfg) => connectorConfig = cfg
      case Failure(err) => connectorLogger.error(s"Could not start Kafka Source Connector ${this.getClass.getName} due to error in configuration.", new ConnectException(err))
    }
  }

  override def stop(): Unit = {
    connectorLogger.info(s"Stopping Kafka Source Connector ${this.getClass.getName}.")
  }

HttpSourceConnectorConfig is a thin wrapper class for the configuration.
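
The class itself is not shown in this post. A plausible sketch of such a wrapper, reconstructed from the getters used later in the Task code (the parsing details are assumptions, not the actual implementation):

class HttpSourceConnectorConfig(val connectorProperties: Map[String, String]) {
    // fail fast if a required property is missing
    private def required(key: String): String =
        connectorProperties.getOrElse(key, throw new IllegalArgumentException(s"Missing config property: $key"))

    def getApiHttpUrl: String = required(HttpSourceConnectorConstants.HTTP_URL_CONFIG)
    def getApiKey: String     = required(HttpSourceConnectorConstants.API_KEY_CONFIG)
    def getTopic: String      = required(HttpSourceConnectorConstants.TOPIC_CONFIG)
    def getService: String    = required(HttpSourceConnectorConstants.SERVICE_CONFIG)

    def getPollInterval: Long =
        connectorProperties.getOrElse(HttpSourceConnectorConstants.POLL_INTERVAL_MS_CONFIG,
                                      HttpSourceConnectorConstants.POLL_INTERVAL_MS_DEFAULT).toLong

    // "units=metric&q=London,uk" -> Map("units" -> "metric", "q" -> "London,uk")
    def getApiParams: Map[String, String] =
        required(HttpSourceConnectorConstants.API_PARAMS_CONFIG)
            .split("&").toList
            .map(_.split("=", 2))
            .collect { case Array(k, v) => k -> v }
            .toMap
}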

We are almost done here. The last function to be overridden is taskConfigs.
This function is in charge of producing (potentially different) configurations for different Source Tasks. In our case, there is no reason for the Source Task configurations to differ. In fact, our HTTP API will benefit little from parallelism, so, to keep things simple, we can assume the number of tasks always to be 1.

  override def taskConfigs(maxTasks: Int): JavaList[JavaMap[String, String]] = List(connectorConfig.connectorProperties.asJava).asJava

The name of the taskConfigs function was changed in the Kafka version 2.1.0 - please consider that when using this code for older Kafka versions.

Source Task class

In a similar manner to the Source Connector class, we implement the Source Task abstract class. It is only slightly more complex than the Connector class.

Just like for the Connector, there are start and stop functions to be overridden for the Task.

Remember the taskConfigs function from above? This is where task configuration ends up - it is passed to the Task's start function. Also, similarly to the Connector's start function, we parse the connection properties with HttpSourceTaskConfig, which is the same as HttpSourceConnectorConfig - configuration for Connector and Task in our case is the same.

We also set up the Http service that we are going to use in the poll function - we create an instance of the WeatherHttpService class. (Please note that start is executed only once, upon the creation of the task and not every time a record is polled from the data source.)

  override def start(connectorProperties: JavaMap[String, String]): Unit = {
    Try(new HttpSourceTaskConfig(connectorProperties.asScala.toMap)) match {
      case Success(cfg) => taskConfig = cfg
      case Failure(err) => taskLogger.error(s"Could not start Task ${this.getClass.getName} due to error in configuration.", new ConnectException(err))
    }

    val apiHttpUrl: String = taskConfig.getApiHttpUrl
    val apiKey: String = taskConfig.getApiKey
    val apiParams: Map[String, String] = taskConfig.getApiParams

    val pollInterval: Long = taskConfig.getPollInterval

    taskLogger.info(s"Setting up an HTTP service for ${apiHttpUrl}...")
    Try( new WeatherHttpService(taskConfig.getTopic, taskConfig.getService, apiHttpUrl, apiKey, apiParams) ) match {
      case Success(service) =>  sourceService = service
      case Failure(error) =>    taskLogger.error(s"Could not establish an HTTP service to ${apiHttpUrl}")
                                throw error
    }

    taskLogger.info(s"Starting to fetch from ${apiHttpUrl} each ${pollInterval}ms...")
    running = new JavaBoolean(true)
  }

The Task also has the stop function. But, just like for the Connector, it does not do much, because there is no need to sign out from an HTTP API session.
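
The body of stop is not shown here; a plausible sketch - assuming running is an atomic flag, as the running.get call in the poll code below suggests - would be:

  override def stop(): Unit = {
    taskLogger.info(s"Stopping Kafka Source Task ${this.getClass.getName}.")
    running.set(false) // poll checks this flag and returns null once it is cleared
  }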

Now let us see how we get the data from our HTTP API - by overriding the poll function.

The fetchRecords function uses the sourceService HTTP service initialised in the start function. sourceService's sourceRecords function requests data from the HTTP API.

  override def poll(): JavaList[SourceRecord] = this.synchronized { if(running.get) fetchRecords else null }

  private def fetchRecords: JavaList[SourceRecord] = {
    taskLogger.debug("Polling new data...")

    val pollInterval = taskConfig.getPollInterval
    val startTime    = System.currentTimeMillis

    val fetchedRecords: Seq[SourceRecord] = Try(sourceService.sourceRecords) match {
      case Success(records)                    => if(records.isEmpty) taskLogger.info(s"No data from ${taskConfig.getService}")
                                                  else taskLogger.info(s"Got ${records.size} results for ${taskConfig.getService}")
                                                  records

      case Failure(error: Throwable)           => taskLogger.error(s"Failed to fetch data for ${taskConfig.getService}: ", error)
                                                  Seq.empty[SourceRecord]
    }

    val endTime     = System.currentTimeMillis
    val elapsedTime = endTime - startTime

    if(elapsedTime < pollInterval) Thread.sleep(pollInterval - elapsedTime)

    fetchedRecords.asJava
  }

Phew - that is the interface implementation done. Now for the fun part...

Requesting data from OpenWeatherMap's API

The fun part is rather straightforward. We use the scalaj.http library to issue a very simple HTTP request and get a response.

Our WeatherHttpService implementation will have two functions:

  • httpServiceResponse that will format the request and get data from the API
  • sourceRecords that will parse the Schema and wrap the result within the Kafka SourceRecord class.

Please note that error handling takes place in the fetchRecords function above.

    override def sourceRecords: Seq[SourceRecord] = {
        val weatherResult: HttpResponse[String] = httpServiceResponse
        logger.info(s"Http return code: ${weatherResult.code}")
        val record: Struct = schemaParser.output(weatherResult.body)

        List(
            new SourceRecord(
                Map(HttpSourceConnectorConstants.SERVICE_CONFIG -> serviceName).asJava, // partition
                Map("offset" -> "n/a").asJava, // offset
                topic,
                schemaParser.schema,
                record
            )
        )
    }

    private def httpServiceResponse: HttpResponse[String] = {

        @tailrec
        def addRequestParam(accu: HttpRequest, paramsToAdd: List[(String, String)]): HttpRequest = paramsToAdd match {
            case (paramKey,paramVal) :: rest => addRequestParam(accu.param(paramKey, paramVal), rest)
            case Nil => accu
        }

        val baseRequest = Http(apiBaseUrl).param("APPID",apiKey)
        val request = addRequestParam(baseRequest, apiParams.toList)

        request.asString
    }
Parsing the Schema

Now the last piece of the puzzle - our Schema parsing class.

The short version of it, which would do just fine, is just 2 lines of class (actually - object) body:

object StringSchemaParser extends KafkaSchemaParser[String, String] {
    override val schema: Schema = Schema.STRING_SCHEMA
    override def output(inputString: String) = inputString
}

Here we say we just want to use the pre-defined STRING_SCHEMA value as our schema definition. And pass inputString straight to the output, without any alteration.

Looks too easy, does it not? Schema parsing could be a big part of Source Connector implementation. Let us implement a proper schema parser. Make sure you read the Confluent Developer Guide first.

Our schema parser will be encapsulated into the WeatherSchemaParser object. KafkaSchemaParser is a trait with two type parameters - inbound and outbound data type. This indicates that the Parser receives data in String format and the result is a Kafka's Struct value.

object WeatherSchemaParser extends KafkaSchemaParser[String, Struct]

The first step is to create a schema value with the SchemaBuilder. Our schema is rather large, therefore I will skip most fields. The field names given are a reflection of the hierarchy structure in the source JSON. What we are aiming for is a flat, table-like structure - a likely Schema creation scenario.

For JSON parsing we will be using the Scala Circle library, which in turn is based on the Scala Cats library. (If you are a Python developer, you will see that Scala JSON parsing is a bit more involved (this might be an understatement), but, on the flipside, you can be sure about the result you are getting out of it.)

    override val schema: Schema = SchemaBuilder.struct().name("weatherSchema")
        .field("coord-lon", Schema.FLOAT64_SCHEMA)
        .field("coord-lat", Schema.FLOAT64_SCHEMA)

        .field("weather-id", Schema.FLOAT64_SCHEMA)
        .field("weather-main", Schema.STRING_SCHEMA)
        .field("weather-description", Schema.STRING_SCHEMA)
        .field("weather-icon", Schema.STRING_SCHEMA)
        
        // ...
        
        .field("rain", Schema.FLOAT64_SCHEMA)
        
        // ...

Next we define case classes, into which we will be parsing the JSON content.

   case class Coord(lon: Double, lat: Double)
   case class WeatherAtom(id: Double, main: String, description: String, icon: String)

That is easy enough. Please note that the case class attribute names match one-to-one with the attribute names in JSON. However, our Weather JSON schema is rather relaxed when it comes to attribute naming. You can have names like type and 3h, both of which are invalid value names in Scala. What do we do? We give the attributes valid Scala names and then implement a decoder:

    case class Rain(threeHours: Double)
    object Rain {
        implicit val decoder: Decoder[Rain] = Decoder.instance { h =>
            for {
                threeHours <- h.get[Double]("3h")
            } yield Rain(
                threeHours
            )
        }
    }

The rain case class is rather short, with only one attribute. The corresponding JSON name was 3h. We map '3h' to the Scala attribute threeHours.

Not quite as simple as JSON parsing in Python, is it?

In the end, we assemble all sub-case classes into the WeatherSchema case class, representing the whole result JSON.

    case class WeatherSchema(
                                coord: Coord,
                                weather: List[WeatherAtom],
                                base: String,
                                mainVal: Main,
                                visibility: Double,
                                wind: Wind,
                                clouds: Clouds,
                                dt: Double,
                                sys: Sys,
                                id: Double,
                                name: String,
                                cod: Double
                            )

Now, the parsing itself. (Drums, please!)

structInput here is the input JSON in String format. WeatherSchema is the case class we created above. The Circle decode function returns a Scala Either monad, error on the Left(), successful parsing result on the Right() - nice and tidy. And safe.

        val weatherParsed: WeatherSchema = decode[WeatherSchema](structInput) match {
            case Left(error) => {
                logger.error(s"JSON parser error: ${error}")
                emptyWeatherSchema
            }
            case Right(weather) => weather
        }

Now that we have the WeatherSchema object, we can construct our Struct object that will become part of the SourceRecord returned by the sourceRecords function in the WeatherHttpService class. That in turn is called from the HttpSourceTask's poll function that is used to populate the Kafka topic.

        val weatherStruct: Struct = new Struct(schema)
            .put("coord-lon", weatherParsed.coord.lon)
            .put("coord-lat", weatherParsed.coord.lat)

            .put("weather-id", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).id)
            .put("weather-main", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).main)
            .put("weather-description", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).description)
            .put("weather-icon", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).icon)

            // ...

Done!

Considering that Schema parsing in our simple example was optional, creating a Kafka Source Connector for us meant creating a Source Connector class, a Source Task class and a Source Service class.

Creating JAR(s)

JAR creation is described in the Confluent's Connector Development Guide. The guide mentions two options - either all the library dependencies can be added to the target JAR file, a.k.a an 'uber-Jar'. Alternatively, the dependencies can be copied to the target folder. In that case they must all reside in the same folder, with no subfolder structure. For no particular reason, I went with the latter option.

The Developer Guide says it is important not to include the Kafka Connect API libraries there. (Instead they should be added to CLASSPATH.) Please note that for the latest Kafka versions it is advised not to add these custom JARs to CLASSPATH. Instead, we will add them to connectors' plugin.path. But that we will leave for another blog post.

Scala - was it worth using it?

Only if you are a big fan. The code I wrote is very Java-like and it might have been better to write it in Java. However, if somebody writes a Scala wrapper for the Connector interfaces, or, even better, if a Kafka Scala API is released, writing Connectors in Scala would be a very good choice.connector

Categories: BI & Warehousing

The Importance of Feature Engineering and Selection

Tue, 2019-02-19 10:27

In machine learning your model is only ever as good as the data you train it on. As such a significant proportion of your effort should be focused on creating a dataset that is optimised to maximise the information density of your data. Feature engineering and selection are the methods used for achieving this goal.

In this context, the definition of a feature will be a column or attribute of the data.

Feature engineering is a broad term that covers a number of manipulations that may be carried out on your dataset. There are therefore many processes that could be considered part of feature engineering. In this post I introduce some of the high-level activities carried out as a part of feature engineering, as well as, some of the most common methods of feature selection, but this is by no means an exhaustive list.

Engineering Features

Feature engineering is the process by which knowledge of data is used to construct explanatory variables, features, that can be used to train a predictive model. Engineering and selecting the correct features for a model will not only significantly improve its predictive power, but will also offer the flexibility to use less complex models that are faster to run and more easily understood.

At the start of every machine learning project the raw data will be inevitably messy and unsuitable for training a model. The first step is always data exploration and cleaning, which involves changing data types and removing or imputing missing values. With an understanding of the data gained through exploration, it can be prepared in such a way that it is useful for the model. This may include removing outliers or specific features you don’t want the model to learn; as well as creating features from the data that better represent the underlying problem, facilitating the machine learning process and resulting in improved model accuracy.

Unprocessed data will likely contain features with the following problems:

Issue Solution Missing values Imputed in data cleaning Does not belong to the same dimension Normalisation/standardisation Information redundancy Filtered out in feature selection Decomposing or Splitting Features

One form of feature engineering is to decompose raw attributes into features that will be easier to interpret patterns from. For example, decomposing dates or timestamp variables into a variety of constituent parts may allow models to discover and exploit relationships. Common time frames for which trends occur include: absolute time, day of the year, day of the week, month, hour of the day, minute of the hour, year, etc. Breaking dates up into new features such as this will help a model better represent structures or seasonality in the data. For example, if you were investigating ice cream sales, and created a “Season of Sale” feature, the model would recognise a peak in the summer season. However, an “Hour of Sale” feature would reveal an entirely different trend, possibly peaking in the middle of each day.

Your data can also be binned into buckets and converted into factors (numerical categories) or flattened into a column per category with flags. Which of these will work best for your data depends on a number of factors including how many categorical values you have, and their frequency. (A similar process can be utilised for natural language processing or textual prediction see bag of words.)

Data Enrichment

Data enrichment is the process of creating new features by introducing data from external sources. Externally collated data is invaluable in prediction success, there is a plethora of publicly accessible datasets that will in most situations create impactful features.

Third party datasets could include attributes that are challenging or costly to collect directly; or are possibly more accurately available online.

It is important when enriching a dataset to consider the relevance of sources, as irrelevant features will unnecessarily complicate the model adding to the noise and increasing the chance of overfitting. For example, when working with dates it is generally insightful to introduce data on national holidays. In the case of our ice cream sales example, you may want to include national holidays, temperature and weather features, as these would be expected to influence sales. However, adding temperature or weather data from another country or other areas will definitely not be relevant and will in the best case have no relation to the data, but in the worst case have a spurious correlation and mislead the model when training.

Feature Transformations

Feature transformations can include aggregating or combining attributes to create new features. Useful and relevant features will depend on the problem at hand but averages, sums and ratios over different groupings can better expose trends to a model.

Multiplying or aggregating features to create new combined features can help with this. Categorical features can be combined into a single feature containing all combination of the two categories. This can easily be overdone and it is necessary to be careful as to not overfit due to misleading combined features.

It is possible to identify higher order interactions via a simple decision tree, the initial branches can be used to identify which features to combine.

A general requirement for some machine learning algorithms is standardisation/normalisation. This rescales the features so they represent a standard normal distribution (centred around 0 with a standard deviation of 1). The benefits of standardisation are that you do not emphasise variables with larger magnitudes and when comparing measurements with different units.

Automated Feature Engineering

Engineering features manually as described above can be very time consuming and requires a good understanding of the underlying data, structures in the data, the problem you are trying to solve and how best to represent the data to have the desired effect. Manual feature engineering is problem specific and cannot be applied to another dataset or problem.

There has been some progress made in the automation of feature engineering. FeatureTools for example is a python framework for transforming datasets into feature matrices. In my opinion there are positives and negatives to such an approach Feature engineering is time-consuming and any automation of this process would be beneficial. However, creating many useless features will lead to overfitting and automatically created features can result in loss of interpretability and understanding.

Feature Selection

Of the features now available in your data set, some will be more influential than others on the model accuracy. Feature selection aims to reduce the dimensionality of the problem by removing redundant or irrelevant features. A feature may be redundant if it is highly correlated with another feature, but does so because it is based on the same underlying information. These types of features can be removed from the data set without any loss of information. In our ice cream example, sales may be correlated with temperature and suncream usage, but the relationship with suncream is a result of this also being correlated with the confounding variable temperature.

Reducing the number of features through feature selection ensures training the model will require less memory and computational power, leading to shorter training times and will also help to reduce the chance of overfitting. Simplification of the training data will also make the model easier to interpret, which can be important when justifying real-world decision making as a result of model outputs.

Feature Selection Methods

Feature selection algorithms rank or score features based on a number of methods so that the least significant features can be removed. In general, the features are chosen from two perspectives; feature divergence and correlations between features and the dependent variable (the value being predicted). Some models have built-in feature selections, that aim to reduce or discount features as part of the model building process, for example LASSO Regression.

Methods that can be used to reduce features include: Correlation

A feature that is strongly correlated with the dependent variable may be important to the model. The correlation coefficients produced are univariate and therefore only correspond to each individual feature’s relationship to the dependent variable, as opposed to combinations of features.

Near Zero Variance

Depending on the problem you are dealing with you may want to remove constant and almost constant features across samples. There are functions that will remove these automatically such as nzv() in R. They can be tuned from removing only features which have a single unique value across all samples or those that have a few unique values across the set, to those with a large ratio of the most common value to the second most common.

Principal component analysis (PCA)

PCA is an unsupervised dimensionality reduction method, its purpose is to find the directions (the so-called principal components) in feature space that maximise the variance in the dataset. You are essentially finding the axes of feature space that are intuitive to the shape of the data, where there is the greatest variation, and therefore the most information. A very simple example would be a 3D feature space of x, y, z. If you look at the data through the x,y axis and all of your points were tightly clustered together this would not be a very good axis to view your data structure though. However, if you viewed it in the x, z plane and your data was spread out, this would be much more useful as you are able to observe a trend in the data. Principal components are dimensions along which your data points are most spread out, but as opposed to the example above, feature space will have n-dimensions not 3, and a principal component can be expressed a single feature or as a combination of many existing features.

Linear discriminant analysis (LDA)

LDA is a supervised dimensionality reduction method, using known class groupings. It achieves a similar goal to PCA, but instead of finding the axes that maximise the variance, it will represent the axes that maximise the separation between multiple classes. These are called linear discriminants.

For multi-class classification, it would be assumed that LDA would achieve better results than PCA, but this is not always the case.

Summary

The features in your data will influence the results that your predictive model can achieve.

Having and engineering good features will allow you to most accurately represent the underlying structure of the data and therefore create the best model.

Features can be engineered by decomposing or splitting features, from external data sources, or aggregating or combining features to create new features.

Feature selection reduces the computation time and resources needed to create models as well as preventing overfitting which would degrade the performance of the model. The flexibility of good features allows less complex models, which would be faster to run and easier to understand, to produce comparable results to the complex ones.

Complex predictive modelling algorithms perform feature importance and selection internally while constructing models. These models can also report on the variable importance determined during the model preparation process. However, this is computationally intensive and by first removing the most obviously unwanted features, a great deal of unnecessary processing can be avoided.

Categories: BI & Warehousing

Oracle OpenWorld Europe : London 2019

Tue, 2019-01-22 07:39

Some eleven thousand people descended on Oracle OpenWorld Europe in London last week for two days of business and technical sessions delivered by a mixture of members of Oracle’s product team and end users giving real-world case studies of adoption of Oracle’s Cloud offerings and product roadmaps.

Screen-Shot-2019-01-22-at-13.16.24

Something that may not surprise anyone is that at OpenWorld, to speak of anything other than Cloud or Autonomous would be somewhat blasphemous.

It’s a shrewd move this by Oracle to branch outside of their flagship annual conference held in Redwood Shores in October and the attendance backed up the rationale that offering free entry was the right thing to do.

Some of the observations that I made after attending were:

The future is Autonomous

Oracle’s Autonomous Database offering is being heavily pushed despite being a relatively immature product with very few real-world examples yet. The concept is certainly valid and it’s worth new and existing customers of Oracle seriously considering trialling.

There are two autonomous offerings. The autonomous data warehouse (ADW) and autonomous transaction processing (ATP).

Both are fully cloud managed by Oracle, are elastic so that they can be scaled up and down on demand, and most importantly - are autonomous. So the marketing spiel goes, they are self driving, self securing, self repairing. You’ll see this a lot but basically it means that the manual tasks that a DBA would normally perform are taken care of by Oracle. Think patching etc…

AI & ML

You can tell that Oracle are really getting behind the latest trends in the technology market. AI will be a feature of all of their Cloud applications with Mark Hurd (Oracle CEO) predicting that by 2025 all applications on the market with have AI factored in (fair prediction)

Further more Oracle's 2018 acquisiton of DataScience.com show's the strategic vision of the companies board.

Blockchain

Also picking up on the cyber security side of things, Oracle spoke a lot about the role that Blockchain will play in enterprises going forwards. Oracle’s Blockchain cloud platform offering gives enterprises a rapid and simplified deployment of blockchain networks.

Final Thoughts

In summary, this was a really good event for Oracle to run and I really hope they continue to do so. It gave a chance for the Oracle community to come together again and in a growingly competitive market for Cloud, Oracle needs to keep investing in its community going forwards.

Conceptually Oracle has some very timely cloud offerings in their armoury and it will be interesting to come back in 12 months time and see how the adoption of these applications & platforms is going.

Categories: BI & Warehousing

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

Mon, 2018-11-12 05:18
How Are My Users Connecting? Analyzing OAC and OBIEE entry points

Are you managing an OAC or OBIEE instance and your life is nice and easy since you feel like having everything in control: your users browse existing dashboards, create content via Analysis, Data Visualization or SmartView and deliver data via Agents or download dashboard content to use in Excel. You feel safe since you designed your platform to provide aggregated data and track every query via Usage Tracking.

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

But one day you start noticing new BI tools appearing in your company that provide similar KPIs to the ones you are already exposing and you start questioning where those data are coming from. Then suddently realize they are automagically sourcing data from your platform in ways you don't think you can control or manage.
Well, you're not alone, let me introduce you on how to monitor OAC/OBIEE connections via network sniffing and usage tracking in this new world of self-service BI platforms.

A Bit of History

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

Anybody who has been for some time in the Analytics market will be able to recognise the situation described in the image above as a direct experience: multiple people having different views on a KPI calculation and therefore results. Back in the days, that problem was strictly related to the usage of Excel as BI tool and the fact that everybody was directly accessing raw data to build up their own KPIs.

Centralised BI Solutions

The landscape started to change when Centralised Enterprise BI Solutions (like OBIEE or in more recent times OAC ) started appearing and being developed in the market. The Key point of those solutions was to provide a unique source of truth for a certain set of KPIs across the organization.

However, the fact that those tools were centralised in the hands of the IT department, meant most of the times a lack of agility for the Business Departments: every new KPI had to be well defined, understood, documented, implemented by IT, validated and delivered in a process that could take months. Even when the development phase was optimised, via DevOps practices for example, time was still burned due to the communication and coordination efforts which are necessary between Business and IT teams.

Self Service BI Platforms

In order to solve the agility problem, in the last few years a new bottom-up approach has been suggested by the latest set of self-service Analytics tools: a certain set of KPIs is developed locally directly by the Business Department and then, once the KPI has been validated and accepted, its definition and the related data model is certified to allow a broader audience to use it.

Oracle has historically been a leader on the Centralised BI platform space with OBIEE being the perfect tool for this kind of reporting. In recent years, Data Visualization closed the gap of the Self-Service Analytics, providing tools for data preparation, visualization and machine learning directly in the hands of Business Users. Oracle Analytics Cloud (OAC) combines in a unique tool both the traditional centralised BI as well as the self-service analytics providing the best option for each use case.

What we have seen at various customer is a proliferation of BI tools being acquired from various departments: most of the time a centralised BI tool is used side by side with one or more self-service with little or no control over data source usage or KPI calculation.

The transition from old-school centralised BI platform to the new bottom-up certified systems is not immediate and there is no automated solution for it. Moreover, centralised BI platforms are still key in most corporates with big investments associated with them in order to get fully automated KPI management. A complete rewrite of the well-working legacy BI solutions following the latest BI trends and tools is not a doable/affordable on short-term and definitively not a priority for the business.

A Mix of The Two

So, how can we make the old and the new world coexist in a solution which is efficient, agile, and doesn't waste all well defined KPIs that are already produced? The solution that we are suggesting more and more is the re-usage of the central BI solution as a curated data source for the self-service tools.

Just imagine the case where we have a very complex Churn Prediction formula, based on a series of fields in a star schema that has been already validated and approved by the Business. Instead of forcing a new user to rewrite the whole formula from the base tables we could just offer, based on the centralised BI system, something like:

Select "Dim Account"."Account Code", "Fact Churn"."Churn Prediction" from "Churn"

There are various benefits to this:

  • No mistakes in formula recalculation
  • No prior knowledge of joining Condition, filtering, aggregation needed
  • Security system inheritance if specific filters or security-sensitive fields were defined, those settings will still be valid.
  • No duplication of code, with different people accessing various versions of the same KPIs.

Using the centralised BI system to query existing KPIs and mashing-up with new datasources is the optimal way of giving agility to the business but at the same time certifying the validity of the core KPIs.

OBIEE as a datasource

A lot of our customers have OBIEE as their own centralised BI reporting tool and are now looking into expanding the BI footprint with a self-service tool. If the chosen tool is Oracle Data Visualization then all the hard work is already done: it natively interfaces with OBIEE's RPD and all the Subject Areas are available together with the related security constraints since the security system is shared.

But what if the self-service tool is not Oracle Data Visualization? How can you expose OBIEE's Data to an external system? Well, there are three main ways:

The first one is by using web-services: OAC (OBIEE) provides a set of SOAP web-services that can be called via python for example, with one of them being executeSQLQuery. After passing the SQL in a string the results are returned in XML format. This is the method used for example by Rittman Mead Insights. SOAP Web-services, however, can't directly be queried by BI tools this is why we created Unify to allow OBIEE connections from Tableau (which is now available for FREE!).
If you aren't using Tableau, a more generic connection method that can is accessible by most of BI tools is via ODBC: OBIEE's BIServer (the component managing the RPD) can be exposed via ODBC by installing the AdminTool Drivers and creating an ODBC connection.
How Are My Users Connecting? Analyzing OAC and OBIEE entry points

Please note that the ODBC method is only available if the BIServer port is not blocked by firewalls. Once the port is open, the ODBC datasource can be queried by any tool having ODBC querying capabilities.

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

The last method is obviously Smartview, that allows sourcing from pre-existing or the creation of new Analysis with the option of refreshing the data on demand. Smartview is the perfect choice if your target Analytical tool is one of the two supported: Excel or Powerpoint.

Good for all use-cases?

Are the above connection methods good in every situation?

via GIPHY

The solutions described above work really well if you let OBIEE do its job: KPI calculations, aggregations, group by and joins or, in other terms, if your aim is to extract aggregated data. OBIEE is not a massive data exporting tool, if your plan is to export 100k rows (just a random number) every time then you may need to rethink about the solution since you:

  • will experience poor performances since you're adding a layer (OAC) between where the data resides (DB) and yourself
  • put the OBIEE environment under pressure since it has to run the query and transform the resultset in XML before pushing it to you

If that's the use case you're looking for then you should think about alternative solutions like sourcing the data directly from the database and possibly moving your security settings there.

How Can You Monitor Who is Connecting?

Let's face the reality, in our days everyone tries to make his work as easy as it can. Business Analysts are tech savvy and configurations and connection options are just a google search away. Stopping people from finding alternative solutions to accelerate their work is counterproductive: there will be tension since the analyst work is slowed down thus the usage of the centralized BI platform will decline quickly since analysts will just move to other platforms giving them the required flexibility.

Blocking ports and access methods is not the correct way of providing a (BI) service that should be centrally controlled but used by the maximum amount of people in an organization. Therefore monitoring solutions should be created in order to:

  • Understand how users are interacting with the platform
  • Provide specific workarounds in cases when there is a misuse of the platform

But how can you monitor user's access? Well, you really have two options: network sniffing or usage tracking.

Network Sniffing

Let's take the example of ODBC connections directly to BI Server (RPD). Those connections can be of three main types:

  • From/To the Presentation Service in order to execute queries in the front-end (e.g. via analysis) and to retrieve the data
  • From OBI administrators Admin Tool to modify OAC/OBIEE's metadata but this shouldn't happen in Production systems
  • From End Users ODBC connections to query OAC/OBIEE data with other BI tools

In the type one connection both the sender and receiver (Presentation and BI server) share the same IP (or IPs in case of cluster), while in the second and third type (the one we are interested) the IP address of the packet sender/receiver is different from the IP of the OBIEE server.
We can then simply use a Linux network analysis tool like tcpdump to check the traffic. With the following command, we are able to listen on port 9516 (the BI Server one) and exclude all the traffic generated from the Presentation Server (IP 192.168.1.30)

sudo tcpdump  -i eth0 -ennA 'port 9516' | grep -v "IP 192.168.1.30" 

The following is a representation of the traffic

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

We can clearly see the traffic passing between the user's machine (IP ending with 161 and the BI Server port (IP ending with 30 and port 56639).
This is the first tracking effort and it already provides us with some information (like users IP address) however is limited to ODBC and doesn't tell us the username. Let's see now what can we get from Usage Tracking.

Usage Tracking

We wrote a lot about Usage Tracking, how to enhance and how to use it so I don't want to repeat that. A very basic description of it: is a database table containing statistics of every query generated by OBIEE.
The "every query" bit is really important: the query doesn't have to be generated by the standard front-end (analytics), but a record is created even if is coming from Smartview or with a direct ODBC access to the BIServer.

Looking into S_NQ_ACCT (the default table name) there is an interesting field named QUERY_SRC_CD that, from Oracle documentation contains

The source of the request.

Checking the values for that table we can see:
How Are My Users Connecting? Analyzing OAC and OBIEE entry points
Analysing the above data in Detail

  • DashboardPrompt and ValuePrompt are related to display values in Prompts
  • DisplayValueMap, Member Browser Display Values and Member Browser Path to Value seem related to items display when creating analysis
  • Report is an Analysis execution
  • SOAP is the webservices
  • rawSQL is the usage of Raw SQL (shouldn't be permitted)

So SOAP identifies the webservices, what about the direct ODBC connections? they don't seem to be logged! Not really, looking more in detail in a known dataset, we discovered that ODBC connections are marked with NULL value in QUERY_SRC_CD together with some other traffic.
Looking into the details of the Null QUERY_SRC_CD transactions we can see two types of logs:

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

  • The ones starting with SELECT are proper queries sent via an ODBC call
  • The ones starting with CALL are requests from the Presentation Server to the BI Server

Summarizing all the findings, the following query should give you the list of users accessing OBIEE via either ODBC, SOAP or using rawSQL.

SELECT DISTINCT 
  USER_NAME,
  NVL(QUERY_SRC_CD, 'RPD ODBC') SOURCE, 
  TRUNC(START_TS) TS
FROM S_NQ_ACCT 
WHERE 
   AND 
    (
     QUERY_SRC_CD IS NULL OR 
     UPPER(QUERY_SRC_CD) IN ('SOAP', 'RAWSQL')
    ) 
   AND QUERY_TEXT NOT LIKE '{CALL%'
ORDER BY 3 DESC;

You can, of course, do more than this, like analysing query volumes (ROW_COUNT column) and Subject Areas afflicted in order to understand any potential misuse of the platform!

Real Example

Let's see an example I'll try logging in via ODBC and executing a query. For this I'm using RazorSQL a SQL query tool and OBIEE, exactly the same logs can be found in Oracle Analytics Cloud (OAC) once the Usage Tracking is enabled so, administrators, don't afraid your job is not going to extinct right now.

Small note: Usage Tracking may be available only on non-Autonomous version of Oracle Analytics Cloud, since some parts of the setup need command line access and server configuration changes which may not available on the Autonomous version

Setup

First a bit of a setup: In order to connect to OAC all you need to do is to download OBIEE's Administration Tool, install it and create an ODBC connection. After this we can open RazorSQL and add create a connection.

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

Then we need to specify our connection details, by selecting Add Connection Profile, specifying OTHER as Connection Profile, then selecting ODBC as Connection Type and filling in the remaining properties. Please note that:

  • Datasource Name: Select the ODBC connection entry created with the Admin tool drivers
  • Login/Password: Enter the OAC/OBIEE credentials

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

Querying and Checking the Logs

Then it's time to connect. As expected we see in RazorSQL the list of Subject Areas as datapoints which depend on the security settings configured in Weblogic and RPD.

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

The Login action is not visible from Usage Tracking S_NQ_ACCT table, it should be logged in the S_NQ_INITBLOCK if you have Init Blocks associated with the login. Let's start checking the data and see what's going to happen. First of all, let's explore which Tables and Columns are part of the Usage Tracking Subject Area, by clicking on the + Icon next to it.

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

The various Dims and Facts are exposed as Tables by the ODBC driver, now let's see if this action is logged in the database with the query

SELECT USER_NAME, 
  QUERY_TEXT, 
  QUERY_SRC_CD, 
  START_TS, 
  END_TS, 
  ROW_COUNT 
FROM S_NQ_ACCT

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

We can clearly see that even checking the columns within the Measures table is logged as ODBC call, with the column QUERY_SRC_CD as Null as expected.
Now let's try to fire a proper SQL, we need to remember that the SQL we are writing needs to be in the Logical SQL syntax. An example can be

select `Topic`.`Repository Name` from `Usage Tracking`

Which in RazorSQL returns the row

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

And in the database is logged as

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

We can see the user who run the query, the execution time (START_TS and END_TS) as well as the number of rows returned (ROW_COUNT).
We demonstrated that we now have all the info neccessary to start tracking any misuse of OAC/OBIEE as a datasource via ODBC connections.

Automating the Tracking

The easiest solution to properly track this type of OBIEE usage is to have an Agent that on daily basis reports users accessing OAC/OBIEE via ODBC. This solution is very easy to implement since all the Usage Tracking tables are already part of the Repository. Creating an Agent that reports on Usage Tracking rows having QUERY_SRC_CD field as Null, SOAP or rawSQL covers all the "non traditional" use-cases we have been talking about.

As mentioned above sourcing aggregated data from OAC/OBIEE should be considered a "good practice" since it provides the unique source of truth across the company. On the other side, exporting massive amount of data should be avoided since end-user performances will be slow and there will be an impact on OAC/OBIEE server. Thus setting an upper limit on the number of rows (e.g. ROW_COUNT > 100k) reported by the Agent could also mean identifying all the specific data-exports cases that should drive an impact assessment and a possible solution redesign.

Conclusion

Tools and Options in the Analytical Market are exploding and more and more we'll see companies using a number of different solutions for specific purposes. Centralised BI solutions, built over the years, provide the significant advantage of containing the unique source of truth across the company and should be preserved. Giving agility to Analysts and at the same time keeping the centrality of well defined and calculated KPIs is a challenge we'll face more and more often in the future.
OAC (or OBIEE on-premises) offers the duality of both Centralised and Self-Service Analytics methods together with a variety (webservices, ODBC, Smartview) of connecting methods which makes it the perfect cornerstone of a company analytical system.
Tracking down usage, discovering potential misuse of the platform is very easy so inefficiencies can be addressed quickly to provide adequate agility and performance to all analytical business cases!

Categories: BI & Warehousing

Exciting News for Unify

Tue, 2018-11-06 02:37
Announcement: Unify for Free

We are excited to announce we are going to make Unify available for free. To get started send an email to unify@rittmanmead.com, we will ask you to complete a short set of qualifying questions, then we can give you a demo, provide a product key and a link to download the latest version.

The free version of Unify will come with no support obligations or SLAs. On sign up, we will give you the option to join our Unify Slack channel, through which you can raise issues and ask for help.

If you’d like a supported version, we have built a special Expert Service Desk package for Unify which covers

  • Unify support, how to, bugs and fixes
  • Assistance with configuration issues for OBIEE or Tableau
  • Assistance with user/role issues within OBIEE
  • Ad-hoc support queries relating to OBIEE, Tableau and Unify

Beyond supporting Unify, the Expert Service Desk package can also be used to provide technical support and expert services for your entire BI and analytics platform, including:

  • An agreed number of hours per month for technical support of Oracle and Tableau's BI and DI tools
  • Advisory, strategic and roadmap planning for your platform
  • Use of any other Rittman Mead accelerators including support for our other Open Source tools and DevOps Developer Toolkits
  • Access to Rittman Mead’s On Demand Training
New Release: Unify 10.0.17

10.0.17 is the new version of Unify. This release doesn’t change how Unify looks and feels, but there are some new features and improvements under the hood.

The most important feature is that now you can get more data from OBIEE using fewer resources. While we are not encouraging you to download all your data from OBIEE to Tableau all time (please use filters, aggregation etc.), we realise that downloading the large datasets is sometimes required. With the new version, you can do it. Hundreds of thousands of rows can be retrieved without causing your Unify host to grind to a halt.

The second feature we would like to highlight is that now you can use OBIEE instances configured with self-signed SSL certificates. Self-signed certificates are often used for internal systems, and now Unify supports such configurations.

The final notable change is that you can now run Unify Server as a Windows service. It wasn't impossible to run Unify Server at system startup before, but it is even easier.

And, of course, we fixed some bugs and enhanced the logging. We would like to see our software function without bugs, but sometimes they just happen, and when they do, you will get a better explanation of what happened.

On most platforms, Unify Desktop should auto update, if it doesn’t, then please download manually.

Unify is 100% owned and maintained by Rittman Mead Consulting Ltd, and while this announcement makes it available for free, all copies must be used under an End User Licence Agreement (EULA) with Rittman Mead Consulting Ltd.

Categories: BI & Warehousing

Fixing* Baseline Validation Tool** Using Network Sniffer

Wed, 2018-10-17 05:22

* Sort of
** Not exactly

In the past, Robin Moffatt wrote a number of blogs showing how to use various Linux tools for diagnosing OBIEE and getting insights into how it works (one, two, three, ...). Some time ago I faced a task which allowed me to continue Robin's cycle of posts and show you how to use Wireshark to understand how a certain Oracle tool works and how to search for the solution of a problem more effectively.

To be clear, this blog is not about the issue itself. I could simply write a tweet like "If you faced issue A then patch B solves it". The idea of this blog is to demonstrate how you can use somewhat unexpected tools and get things done.

Obviously, my way of doing things is not the only one. If you are good in searching at My Oracle Support, you possibly can do it even faster, but what is good about my way (except for it is mine, which is enough for me) is that it doesn't involve uneducated guessing. I do an observation and get a clarified answer.

Most of my blogs have disclaimers. This one is not an exception, while its disclaimer is rather small. There is still no silver bullet. This won't work for every single problem in OBIEE. I didn't say this.

Now, let's get started.

The Task

The problem was the following: a client was upgrading its OBIEE system from 11g to 12c and obviously wanted to test for regression, making sure that the upgraded system worked exactly the same as the old one. Manual comparison wasn't an option since they have hundreds or even thousands of analyses and dashboards, so Oracle Baseline Validation Tool (usually called just BVT) was the first candidate as a solution to automate the checks.

Using BVT is quite simple:

  • Create a baseline for the old system.
  • Upgrade
  • Create a new baseline
  • Compare them
  • ???
  • Profit! Congratulations. You are ready to go live.

Right? Well, almost. The problem that we faced was that BVT Dashboards plugin for 11g (a very old 11.1.1.7.something) gave exactly what was expected. But for 12c (12.2.1.something) we got all numbers with a decimal point even while all analyses had "no decimal point" format. So the first feeling we got at this point was that BVT doesn't work well for 12c and that was somewhat disappointing.

SPOILER That wasn't true.

I made a simple dashboard demonstrating the issue.

OBIEE 11g

11g-dash-vs-bvt
Measure values in the XML produced by BVT are exactly as on the dashboard. Looks good.

OBIEE 12c

12c-dash-vs-bvt-1
Dashboard looks good, but values in the XML have decimal digits.

failed

As you can see, the analyses are the same or at least they look very similar but the XMLs produced by BVT aren't. From regression point of view this dashboard must get "DASHBOARDS PASSED" result, but it got "DASHBOARDS DIFFERENT".

Reading the documentation gave us no clear explanation for this behaviour. We had to go deeper and understand what actually caused it. Is it BVT screwing up the data it gets from 12c? Well, that is a highly improbable theory. Decimals were not simply present in the result but they were correct. Correct as in "the same as stored in the database", we had to reject this theory.
Or maybe the problem is that BVT works differently with 11g and 12c? Well, this looks more plausible. A few years have passed since 11.1.1.7 was released and it would not be too surprising if the old version and the modern one had different APIs used by BVT and causing this problem. Or maybe the problem is that 12c itself ignores formatting settings. Let's find out.

The Tool

Neither BVT, nor OBIEE logs gave us any insights. From every point of view, everything was working fine. Except that we were getting 100% mismatch between the source and the target. My hypothesis was that BVT worked differently with OBIEE 11g and 12c. How can I check this? Decompiling the tool and reading its code would possibly give me the answer, but it is not legal. And even if it was legal, the latest BVT size is more than 160 megabytes which would give an insane amount of code to read, especially considering the fact I don't actually know what I'm looking for. Not an option. But BVT talks to OBIEE via the network, right? Therefore we can intercept the network traffic and read it. Shall we?

There are a lot of ways to do it. I work with OBIEE quite a lot and Windows is the obvious choice for my platform. And hence the obvious tool for me was Wireshark.

Wireshark is the world’s foremost and widely-used network protocol analyzer. It lets you see what’s happening on your network at a microscopic level and is the de facto (and often de jure) standard across many commercial and non-profit enterprises, government agencies, and educational institutions. Wireshark development thrives thanks to the volunteer contributions of networking experts around the globe and is the continuation of a project started by Gerald Combs in 1998.

What this "About" doesn't say is that Wireshark is open-source and free. Which is quite nice I think.

Installation Details

I'm not going to go into too many details about the installation process. It is quite simple and straightforward. Keep all the defaults unless you know what you are doing, reboot if asked and you are fine.

If you've never used Wireshark or analogues, the main question would be "Where to install it?". The answer is pretty simple - install it on your workstation, the same workstation where BVT is installed. We're going to intercept our own traffic, not someone else's.

A Bit of Wireshark

Before going to the task we want to solve let's spend some time familiarizing with Wireshark. Its starting screen shows all the network adapters I have on my machine. The one I'm using to connect to the OBIEE servers is "WiFi 2".

Screenshot-2018-10-09-13.50.44

I double-click it and immediately see a constant flow of network packets flying back and forth between my computer and local network machines and the Internet. It's a bit hard to see any particular server in this stream. And "a bit hard" is quite an understatement, to be honest, it is impossible.

wireshark

I need a filter. For example, I know that my OBIEE 12c instance IP is 192.168.1.226. So I add ip.addr==192.168.1.226 filter saying that I only want to see traffic to or from this machine. Nothing to see right now, but if I open the login page in a browser, for example, I can see traffic between my machine (192.168.1.25) and the server. It is much better now but still not perfect.

Screenshot-2018-10-09-14.08.52

If I add http to the filter like this http and ip.addr==192.168.1.226, I definitely can get a much more clear view.

For example, here I opened http://192.168.1.226:9502/analytics page just like any other user would do. There are quite a lot of requests and responses. The browser asked for /analytics URL, the server after a few redirects replied what the actual address for this URL is login.jsp page, then browser requested /bi-security-login/login.jsp page using GET method and got the with HTTP code 200. Code 200 shows that there were no issues with the request.

startpage

Let's try to log in.

login

The top window is a normal browser and the bottom one is Wireshark. Note that my credentials been sent via clear text and I think that is a very good argument in defence of using HTTPS everywhere.

That is a very basic use of Wireshark: start monitoring, do something, see what was captured. I barely scratched the surface of what Wireshark can do, but that is enough for my task.

Wireshark and BVT 12c

The idea is quite simple. I should start capturing my traffic then use BVT as usual and see how it works with 12c and then how it works with 11g. This should give me the answer I need.

Let's see how it works with 12c first. To make things more simple I created a catalogue folder with just one analysis placed on a dashboard.

bvt-dashboard-1

It's time to run BVT and see what happens.

Screenshot-2018-10-11-17.49.59

Here is the dataset I got from OBIEE 12c. I slightly edited and formatted it to make easier to read, but didn't change anything important.

dataset12--1

What did BVT do to get this result? What API did it use? Let's look at Wireshark.

Screenshot-2018-10-11-19.09.27

First three lines are the same as with a browser. I don't know why it is needed for BVT, but I don't mind. Then BVT gets WSDL from OBIEE (GET /analytics-ws/saw.dll/wsdl/v6/private). There are multiple pairs of similar query-response flying back and forth because WSDL is big enough and downloaded in chunks. A purely technical thing, nothing strange or important here.
But now we know what API BVT uses to get data from OBIEE. I don't think anyone is surprised that it is Web Services API. Let's take a look at Web Services calls.

First logon method from nQSessionService. It logs into OBIEE and starts a session.

Screenshot-2018-10-11-19.36.59

Next requests get catalogue items descriptions for objects in my /shared/BVT folder. We can see a set of calls to webCatalogServce methods. These calls are reading my web catalogue structure: all folders, subfolders, dashboard and analysis. Pretty simple, nothing really interesting or unexpected here.

ws01

Then we can see how BVT uses generateReportSQLResult from reportService to get logical SQL for the analysis.

Screenshot-2018-10-11-19.42.07

And gets analysis' logical SQL as the response.

Screenshot-2018-10-11-19.45.10

And the final step - BVT executes this SQL and gets the data. Unfortunately, it is hard to show the data on a screenshot, but the line starting with [truncated] is the XML I showed before.

Screenshot-2018-10-12-12.19.58

And that's all. That's is how BVT gets data from OBIEE.

I did the same for 11g and saw absolutely the same procedure.

Screenshot-2018-10-11-21.01.35

My initial theory that BVT may have been using different APIs for 11g and 12c was busted.

From my experiment, I found out that BVT used xmlViewService to actually get the data. And also I know now that it uses logical SQL for getting the data. Looking at the documentation I can see that xmlViewService has no options related to any formatting. It is a purely data-retrieval service. It can't preserve any formatting and supposed to give only the data. But hey, I've started with the statement "11g preserves formatting", how is that possible? Well, that was a simple coincidence. It doesn't.

In the beginning, I had very little understanding of what keywords to use on MoS to solve the issue. "BVT for 12c doesn't preserve formatting"? "BVT decimal part settings"? "BVT works differently for 11g and 12c"? Now I have something much better - "executeSQLQuery decimal". 30 seconds of searching and I know the answer.

mos-1

This was fixed in 11.1.1.9, but there is a patch for 11.1.1.7.some_of_them. The patch fixes an 11g issue which prevents BVT from getting decimal parts of numbers.

pass

As you may have noticed I had no chance of finding this using my initial problem description. Nether BVT, nor 12g or 11.1.1.7 were mentioned. This thread looks completely unrelated to the issue, I had zero chances to find it.

Conlusion

OBIEE is a complex software and solving issues is not always easy. Unfortunately, no single method is enough for solving all problems. Usually, log files will help you. But when something works but not the way you expect, log files can be useless. In my case BVT was working fine, 11g was working fine, 12c was working fine too. Nothing special to write to logs was happening. That is why sometimes you may need unexpected tools. Just like this. Thanks for reading!

Categories: BI & Warehousing

ODC Appreciation Day: Oracle Cloud PSM Cli

Thu, 2018-10-11 03:11
 Oracle Cloud PSM Cli

Oracle Developer Community (ODC) Appreciation Day (previously known as OTN Appreciation Day) is a day, started as an initiative of Tim Hall, where everyone can share their thanks with the Oracle community by writing about a favourite product, an experience or a story related to Oracle technology.

 Oracle Cloud PSM Cli

Last year I wrote about OBIEE Time Hierarchies and how they are very useful to perform time comparison, shifts, and aggregations.

This year I want to write about the Oracle PaaS Service Manager (PSM) Client!
I've already written a detailed blog post about it; in short, Oracle PSM allows Oracle Cloud administrators to manage their instances via the command line instead of forcing them to use the Web UI.

 Oracle Cloud PSM Cli

PSM Cli allows you to create an Oracle Analytics Cloud instance by just calling

psm analytics create-service -c <CONFIG_FILE> -of <OUTPUT_FORMAT>

and passing a JSON <CONFIG_FILE>, which can easily be downloaded after following the creation process in the Web UI, a bit like the response file in on-premises OBIEE, which can be saved after the first UI installation and customised for future reuse. Examples of the PSM JSON payloads can be found here.

OAC Instances can also easily be started/stopped/restarted with the command

psm analytics start/stop/restart -s <INSTANCE_NAME>

And the status of each command tracked with

psm analytics operation-status -j <JOB_ID>

As mentioned in my previous post, PSM Cli also opens the door to instance management automation, which is a requirement for providing cost-effective, fully isolated, feature-related OAC instances and is useful when thinking about DevOps practices. The fact that PSM Cli is command line based means that it can be integrated with automation tools like Jenkins and thus into any DevOps flow a company designs.

So Thank you, Oracle, for enabling such automation with PSM Cli!

Follow the #ThanksODC hashtag on Twitter to check which posts have been published on the same theme!

Categories: BI & Warehousing

OAC 18.3.3: New Features

Fri, 2018-09-21 07:58
 New Features

I believe there is a hidden strategy behind Oracle's product release schedule: every time I'm either on holiday or on a business trip full of appointments, a new version of Oracle Analytics Cloud is published with a huge set of new features!

 New Features

OAC 18.3.3 went live last week and contains a big set of enhancements, some of which were already described at Kscope18 during the Sunday Symposium. New features appear in almost all the areas covered by OAC: Data Preparation, Data Flows, new visualization types, new security and configuration options, and BIP and Essbase enhancements. Let's have a look at what's there!

Data Preparation

A recurring theme in Europe since last year is GDPR, the General Data Protection Regulation, which aims at protecting the data and privacy of all European citizens. This is very important in our landscape since we "play" with data on a daily basis and we should be aware of what data we can use and how.
Luckily for us, OAC now helps address GDPR with the Data Preparation Recommendations step: every time a dataset is added, each column is profiled and a list of recommended transformations is suggested to the user. Please note that Data Preparation Recommendations only suggests changes to the dataset, so it can't be considered a complete solution to GDPR compliance.
The suggestions may include:

  • Complete or partial obfuscation of the data: useful when dealing with security/user sensitive data
  • Data Enrichment based on the column data can include:
    • Demographical information based on names
    • Geographical information based on locations, zip codes

 New Features

Each of the suggestions applied to the dataset is stored in a data preparation script that can easily be reapplied if the data is updated.

 New Features

Data Flows

Data Flows is the "mini-ETL" component within OAC which allows transformations, joins, aggregations, filtering, binning, machine learning model training and storing the artifacts either locally or in a database or Essbase cube.
Data Flows, however, had some limitations; the first one was that they had to be run manually by the user. With OAC 18.3.3 there is now the option to schedule Data Flows, more or less like we were used to doing when scheduling Agents back in OBIEE.

 New Features

Another limitation was the creation of a single dataset per Data Flow; this has been solved with the introduction of the Branch node, which allows a single Data Flow to produce multiple datasets. This is very useful when the same set of source data and transformations needs to be used to produce various datasets.

 New Features

Two other new features have been introduced to make Data Flows more reusable: Parametrized Sources and Outputs and Incremental Processing.
Parametrized Sources and Outputs allow the data flow source or target to be selected at runtime, making it possible, for example, to create a specific, different dataset for today's load.

 New Features

The Incremental Processing, as the name suggests, is a way to run Data Flows only on top of the data added since the last run (incremental loads in ETL terms; a conceptual SQL sketch follows after this list). In order to have a Data Flow working with incremental loads we need to:

  • Define in the source dataset which is the key column that can be used to indicate new data (e.g. CUSTOMER_KEY or ORDER_DATE) since the last run
  • When including the dataset in a Data Flow enable the execution of the Data Flow with only the new data
  • In the target dataset define if the Incremental Processing replaces existing data or appends data.

Please note that the Incremental Load is available only when using Database Sources.
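
Conceptually, an incremental run only reads the delta from the database source by filtering on the key column chosen in the first step and remembering the highest value processed so far. This is just an illustration of the idea, not the SQL that OAC actually generates, and the table and bind names are invented:

-- Illustration only: SALES_SOURCE and :LAST_PROCESSED_ORDER_DATE are made-up names
SELECT *
FROM   SALES_SOURCE
WHERE  ORDER_DATE > :LAST_PROCESSED_ORDER_DATE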

Another important improvement is Function Shipping when Data Flows are used with Big Data Cloud: if the source datasets come from BDC and the results are stored in BDC, all the transformations like joining, adding calculated columns and filtering are shipped to BDC as well, meaning there is no additional load on OAC for the Data Flow.

Lastly, there is a new Properties Inspector feature in Data Flows allowing you to check properties like name and description as well as to access and modify the schedule of the related flow.

 New Features

Data Replication

It is now possible to use OAC to replicate data from a source system like Oracle's Fusion Apps, Talend or Eloqua directly into Big Data Cloud, Database Cloud or Data Warehouse Cloud. This function is extremely useful since it decouples the queries generated by the analytical tools from the source systems.
As expected, the user can select which objects to replicate, the filters to apply, the destination tables and columns, and the load type, either Full or Incremental.

Project Creation

New visualization capabilities have been added which include:

  • Grid HeatMap
  • Correlation Matrix
  • Discrete Shapes
  • 100% Stacked Bars and Area Charts

In the Map views, Multiple Map Layers can now be added as well as Density and Metric based HeatMaps, all on top of new background maps including Baidu and Google.

 New Features

Tooltips are now supported in all visualizations, allowing the end user to add measure columns which will be shown when hovering over a section of any graph.

 New Features

The Explain feature is now available on metrics, not only on attributes, and has been enhanced: a new anomaly detection algorithm identifies anomalies in combinations of columns, working in the background in asynchronous mode and pushing the anomalies as soon as they are found.

A new feature that many developers will appreciate is AutoSave: we are all used to autosave when using Google Docs, and the same now applies to OAC, where a project is saved automatically at every change. Of course, this feature can be turned off if necessary.
Another very interesting addition is Copy Data to Clipboard: with a right click on any graph, an option to save the underlying data to the clipboard is available. The data can then be pasted natively into Excel.

Did you create a new dataset and want to repoint your existing project to it? With Dataset Replacement it's just a few clicks away: you only need to select the new dataset and re-map the columns used in your current project!

 New Features

Data Management

The datasets/dataflows/projects methodology is typical of what Gartner defined as Mode 2 analytics: analysis done by a business user without any involvement from IT. The step sometimes missing, or hard to perform, in self-service tools is publishing: once a certain dataset is consistent and ready to be shared, it's rather difficult to open it up to a larger audience within the same toolset.
New OAC administrative options address this problem: dataset Certification by an administrator allows a certain dataset to be queried via Ask and DayByDay by other users. There is also a dataset Permissions tab allowing the definition of Full Control, Edit or Read Only access at user or role level. This is the way of bringing self-service datasets back to corporate visibility.

 New Features

A Search tab allows fine control over the indexing of a certain dataset used by Ask and DayByDay. There are now options to select when the indexing is executed as well as which columns to index and how (by column name and value, or by column name only).

 New Features

BIP and Essbase

BI Publisher was added to OAC in the previous version and now includes new features like tighter integration with datasets, which can be used as data sources, as well as features such as email delivery read receipt notification, compressed output and password protection that were already available in the on-premises version.
There is also a new set of features for Essbase including a new UI, REST APIs and, very important security-wise, all external communications (like Smart View) now running over HTTPS.
For a detailed list of new features check this link.

Conclusion

OAC 18.3.3 includes an incredible amount of new features which enable the whole analytics story: from self-service data discovery to corporate dashboarding and pixel-perfect formatting, all within the same tool and with shared security settings. Options like parametrized and incremental Data Flows allow content reusability and enhance overall platform performance by reducing the load on source systems.
If you are looking into OAC and want to know more, don't hesitate to contact us.

Categories: BI & Warehousing

Looker for OBIEE Experts: Introduction and Concepts

Thu, 2018-08-23 08:28
 Introduction and Concepts

Recently I've been doing some personal study around various areas including streaming, machine learning and data visualization, and one of the tools that got my attention is Looker. I initially heard about Looker on a Drill to Detail podcast and have increasingly been hearing about it at conferences and in use cases together with other cloud solutions like BigQuery, Snowflake and Fivetran.

I decided to give it a try myself and, since most of my career has been based on Oracle Business Intelligence (OBI), to write down a comparison between the tools that could help others share my experience of getting introduced to Looker.

OBIEE's Golden Feature: The Semantic Model

As you probably know if you have been working with OBIEE for some time, the centrepiece of its architecture is the Semantic Model contained in the Repository (RPD).

 Introduction and Concepts

In the three layers of the RPD, we model our source data (e.g. database tables) into attributes, metrics and hierarchies which can then be easily dragged and dropped by the end-user into an analysis or data visualization.

I called the RPD "OBIEE's Golden Feature" because to me it's the main benefit of the platform: abstracting the data complexity from end-users and, at the same time, optimizing the query definition to take care of all the features that could be set in the datasource. The importance of the RPD also lies in its centrality: within traditional OBIEE, all Analyses and Dashboards had to be based on Subject Areas exposed by the RPD, meaning that the definition of the metrics was done in a single place in a consistent manner and then spread across all the reporting, providing the single source of truth for the important KPIs in the company, typical of what Gartner calls Mode 1 Analytics.

RPD Development Speed Limitation and Mode 2 Analytics

The RPD is a centralized binary object within the OBIEE infrastructure: in order to develop and test it, a full OBIEE instance is required, and merges between different streams are natively performed via the RPD's Admin Tool.

This complexity, combined with the deep knowledge required to correctly build a valid semantic model, limits the number of people able to create and publish new content, thus slowing down the process from data to insights, typical of the centralized Mode 1 Analytics platform provided by IT teams. Moreover, RPD development is entirely point-and-click within the Admin Tool, which is somewhat slow and old fashioned in a world of scripting, code versioning and git merging. Several solutions are out in the market (including the Rittman Mead Developer Toolkit) to enhance development agility but still, the skills and toolset required to develop new content make it a purely IT-manageable solution.

In order to overcome this limitation, several tools like Tableau, QlikView or Oracle's Data Visualization (included in OAC or in the Desktop version) put all the power in the hands of the end-user: from data sources to graphing, the tools allow an end-to-end journey from data discovery to visualization. The problem with those tools (called Mode 2 Analytics by Gartner) is that there is no central definition of the KPIs, since that is left to every analyst. All those tools address the problem by providing some sort of datasource certification, allowing a datasource to be publicly visible and reusable only when it's validated centrally. Again, for most of those tools, the modelling is done in a visual format, which makes it difficult to debug, version control and automate. I've been speaking about this subject in my presentation "DevOps and OBIEE do it before it's too late".

What if we could provide the same centralized source-of-truth data modelling with an easily scriptable syntax that can be developed by business users without any deep knowledge of SQL or source tables? Well, what we just described is LookML!

LookML

LookML takes the best part of OBIEE, the idea of a modelling layer, and democratizes it so that it is available to all business users with a simple language and set of concepts. Moreover, code versioning is embedded in the tool, so there's no need to teach git branch, commit, push or pull to non-IT people.

So, what are the concepts behind LookML and how can you get familiar with it when comparing it to the metadata modelling in the RPD?

LookML Concepts

Let's start from the basics of RPD modelling: a database table. In LookML each table is represented by an object called a View (the naming is a bit confusing). Moreover, LookML Views can be used not only to map existing database tables but also to create new tables based on existing content and a SQL definition, like opaque views in OBIEE. On top of this, LookML allows the physicalisation of those objects (into a table) and the definition of a refresh schedule. This concept is very useful when aggregates are needed: the aggregate definition (SQL) is defined within the LookML View together with the related refresh schedule.

 Introduction and Concepts

The View itself defines only the source, a bit like the RPD's Physical layer; the next step is defining how multiple Views interact with each other, or, in OBIEE terms, the Business Model layer. In LookML there is an entity called an Explore, which is the place where we define which Views we want to group together and what the linkage is between them. Multiple Explores are defined in a Model, which should be unique per database. So, in OBIEE words, a Model can be compared to a Business Model, with Explores being a subset of Facts and Dimensions grouped in a Subject Area.

 Introduction and Concepts

OK, all "easy" so far, but where do we map the columns? And where do we set the aggregations? As you might expect, both are mapped within a LookML View as Fields. Field is a generic term which covers both metrics and attributes; the LookML naming is as follows:

  • Dimension: in OBIEE's terms, an attribute of a dimension. The terminology is confusing since in LookML the Dimension is the column itself while in OBIEE it is the table. A Dimension can be a column value or a combination of multiple values (like OBIEE's BM Logical Source formulas). A Dimension in LookML can't have any aggregation (as in OBIEE).
  • Measures: in OBIEE's terms, a metric. The definition includes the source formula in SQL syntax, the type of aggregation (min/max/count...) and the drill fields.
  • Filters: this is not something usually defined in OBIEE's RPD; filters are a way of passing a user choice based on a column value back to a calculation formula, a bit like, for the OBIEE experts, overriding session variables with dashboard prompt values.
  • Parameters: again, this is not something usually defined in OBIEE's RPD; you can think of a Parameter as a way of setting up a variable function. E.g. a Parameter with values SUM, AVG, MIN, MAX could be used to change how a certain Measure is aggregated

All good so far? Stick with me and in the future we'll explore more about LookML syntax and Looker in general!

Categories: BI & Warehousing

Parsing Badly Formatted JSON in Oracle DB with APEX_JSON

Mon, 2018-08-20 05:38
Parsing Badly Formatted JSON in Oracle DB with APEX_JSON

After some blogging silence due to project work and holidays, I thought it was a good idea to do a write-up about a problem I faced this week. One of the tasks I was assigned was to parse a set of JSON files stored in an Oracle 12.1 DB Table.

As probably all of you already know, JSON (JavaScript Object Notation) is a lightweight data-interchange format and is widely used for web services due to its flexibility. In JSON there is no header to define (think of CSV as an example): every field is defined in a format like "field name":"field value", and there is no "set of required columns" for a JSON object; when a new attribute needs to be defined, the related name and value can simply be added to the structure. On top of this "schema-free" definition, the field value can be either

  • a single value
  • an array
  • a nested JSON object

Basically, when you start parsing JSON you feel like

Parsing Badly Formatted JSON in Oracle DB with APEX_JSON

The Easy Part

The task assigned wasn't too difficult; after reading the proper documentation, I was able to parse a JSON file like

{
 "field1": "abc",
 "field2": "cde"
}

Using a simple SQL like

select * 
 from TBL_NAME d,
 JSON_TABLE(d.text, '$' COLUMNS (
   field1 VARCHAR2(10) PATH '$.field1',
   field2 VARCHAR2(10) PATH '$.field2'
   )
 )

Parsing arrays is not very complex either, a JSON file like

{
 "field1": "abc",
 "field2": "cde",
 "field3": ["fgh","ilm","nop"]
}

Can be easily parsed using the NESTED PATH call

select * 
 from TBL_NAME d, 
 JSON_TABLE(d.text, '$' COLUMNS (
   field1 VARCHAR2(10) PATH '$.field1',
   field2 VARCHAR2(10) PATH '$.field2',
   NESTED PATH '$.field3[*]' COLUMNS (
     field3 VARCHAR2(10) PATH '$'
   )
 )
)

In case the Array contains nested objects, those can be parsed using the same syntax as before, for example, field4 and field5 of the following JSON

{
 "field1": "abc",
 "field2": "cde",
 "field3": [
           {
            "field4":"fgh",
            "field5":"ilm"
           },
           {
            "field4":"nop",
            "field5":"qrs"
           }
           ] 
}

can be parsed with

NESTED PATH '$.field3[*]' COLUMNS ( 
   field4 VARCHAR2(10) PATH '$.field4',
   field5 VARCHAR2(10) PATH '$.field5' 
)

...Where things got complicated

All very very easy with well-formatted JSON files, but then I faced the following

{ 
"field1": "abc", 
"field2": "cde", 
"field3": [
    {
     "field4": "aaaa", 
     "field5":{ 
           "1234":"8881", 
           "5678":"8893" 
          }
     },
     {
      "field4": "bbbb",  
      "field5":{ 
            "9876":"8881", 
            "7654":"8945",
            "4356":"7777"
          }
      } 
      ] 
}

Basically, the JSON file started including fields whose names represent Ids, meaning an association like Product Id (1234) is a member of Brand Id (8881). This immediately triggered my reaction:

Parsing Badly Formatted JSON in Oracle DB with APEX_JSON

After checking the documentation again, I wasn't able to find anything that could help me parse that, since all the calls included a predefined PATH string which, in the case of the Ids, I couldn't know beforehand.

I then reached out to my network on Twitter

To all my @Oracle SQL friends out there: I need to parse a JSON object which has a strange format of {“name”:”abc”, “345678”:”123456”} with the 345678 being an Id I need to extract, any suggestions? none of the ones mentioned here seems to help https://t.co/DRWdGvCVfu pic.twitter.com/PfhtUnAeR4

— Francesco Tisiot (@FTisiot) 14 August 2018

That generated quite a lot of responses. Initially, the discussion was related to the correctness of the JSON structure which, from a purist point of view, should be mapped as

{ 
"field1": "abc",
"field2": "cde",
"field3": [ 
     {
      "field4": "aaaa", 
      "field5":
           { 
             "association": [
                  {"productId":"1234", "brandId":"8881"},
                  {"productId":"5678", "brandId":"8893"}
                  ]
           },
      },
      {
       "field4": "bbbb", 
       "field5":    
           {
             "association": [
                  {"productId":"9876", "brandId":"8881"},
                  {"productId":"7654", "brandId":"8945"},
                  {"productId":"4356", "brandId":"7777"}
                  ]
           }
      } 
      ]
}

basically going back to standard field names like productId and brandId that could be easily parsed. In my case this wasn't possible since the JSON format was already widely used at the client.
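
For completeness, had the restructured format been an option, the whole document could have been handled by a single JSON_TABLE call with nested NESTED PATH clauses, following the same pattern as the earlier queries (a sketch, untested against real data):

select *
 from TBL_NAME d,
 JSON_TABLE(d.text, '$' COLUMNS (
   field1 VARCHAR2(10) PATH '$.field1',
   field2 VARCHAR2(10) PATH '$.field2',
   NESTED PATH '$.field3[*]' COLUMNS (
     field4 VARCHAR2(10) PATH '$.field4',
     NESTED PATH '$.field5.association[*]' COLUMNS (
       productId VARCHAR2(10) PATH '$.productId',
       brandId   VARCHAR2(10) PATH '$.brandId'
     )
   )
 ))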

Possible Solutions

Since a change in the JSON format wasn't possible, I needed to find a way of parsing it; a few solutions were mentioned in the Twitter thread:

  • Regular Expressions
  • Bash external table preprocessor
  • Java Stored functions
  • External parsing before storing data into the database

All the above were discarded since I wanted to achieve a solution based only on existing database functions. Other suggestions included JSON_DATAGUIDE and JSON_OBJECT.GET_KEYS which, unfortunately, are available only from 12.2 (I was on 12.1).

But, just a second before surrendering, Alan Arentsen suggested using APEX_JSON.PARSE procedure!

The Chosen One: APEX_JSON

The APEX_JSON package offers a series of procedures to parse JSON in PL/SQL, in particular:

  • PARSE: Parses a JSON formatted string contained in a VARCHAR2 or CLOB storing all the members.
  • GET_COUNT: Returns the number of array elements or object members
  • GET_MEMBERS: Returns the table of members of an object

You can already imagine how a combination of those calls can parse the JSON text defined above; let's have a look at the JSON again:

{ 
"field1": "abc", 
"field2": "cde", 
"field3": [
    {
     "field4": "aaaa", 
     "field5":{ 
           "1234":"8881", 
           "5678":"8893" 
          }
     },
     {
      "field4": "bbbb",  
      "field5":{ 
            "9876":"8881", 
            "7654":"8945",
            "4356":"7777"
          }
      } 
      ] 
}

The parsing process should iterate over the field3 entries (2 in this case) and, for each entry, iterate over the fields in field5 to get both the field name and the field value.
The number of field3 entries can be found with

APEX_JSON.GET_COUNT(p_path=>'field3',p_values=>j);

And the list of members of field5 with

APEX_JSON.GET_MEMBERS(p_path=>'field3[%d].field5',p_values=>j,p0=>i);

Note the p_path parameter set to field3[%d].field5, meaning that we want to extract field5 from the nth entry in field3. The row number is defined by p0=>i, with i being the variable we use in our FOR loop.

The complete code is the following

DECLARE 
   j APEX_JSON.t_values; 
   r_count number;
   field5members   WWV_FLOW_T_VARCHAR2;
   p0 number;
   BrandId VARCHAR2(10);
BEGIN
APEX_JSON.parse(j,'<INSERT_JSON_STRING>');
-- Getting the number of field3 elements
r_count := APEX_JSON.GET_COUNT(p_path=>'field3',p_values=>j);
dbms_output.put_line('Nr Records: ' || r_count);

-- Looping over each element in field3
FOR i IN 1 .. r_count LOOP
-- Getting the field5 members for the ith element of field3
 field5members := APEX_JSON.GET_MEMBERS(p_path=>'field3[%d].field5',p_values=>j,p0=>i);
-- Looping over all field5 members
 FOR q in 1 .. field5members.COUNT LOOP
-- Extracting the BrandId
   BrandId := APEX_JSON.GET_VARCHAR2(p_path=>'field3[%d].field5.'||field5members(q) ,p_values=>j,p0=>i);
-- Printing the Product Id and BrandId
   dbms_output.put_line('Product Id ="'||field5members(q)||'" BrandId="'||BrandId ||'"');
 END LOOP;
END LOOP;

END;

Note that, in order to extract the BrandId we used

APEX_JSON.GET_VARCHAR2(p_path=>'field3[%d].field5.'||field5members(q) ,p_values=>j,p0=>i);

Specifically, the PATH is 'field3[%d].field5.'||field5members(q). As you can imagine, we are appending the member name (field5members(q)) to the path described previously, forming a string like field3[1].field5.1234 that will correctly extract the associated value.
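
To apply this to the original task, a set of JSON documents stored in a table, the same logic can be wrapped in a cursor loop. A minimal sketch, assuming the documents live in the text column of the TBL_NAME table used in the earlier examples:

DECLARE
   j APEX_JSON.t_values;
   r_count number;
BEGIN
  -- Parse every JSON document stored in the table, one row at a time
  FOR r IN (SELECT text FROM TBL_NAME) LOOP
    APEX_JSON.parse(p_values => j, p_source => r.text);
    r_count := APEX_JSON.GET_COUNT(p_path=>'field3', p_values=>j);
    dbms_output.put_line('Nr Records: ' || r_count);
    -- ...same nested loops over field3 and field5 as in the block above...
  END LOOP;
END;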

Conclusion

Three things to take away from this experience. The first is the usage of JSON_TABLE: with JSON_TABLE you can parse well-constructed JSON documents easily and powerfully.
The second: APEX_JSON is a useful package to parse "not very well" constructed JSON documents, allowing iteration across the elements of JSON arrays and object members.
The last, which is becoming more relevant in my career every day, is the importance of networking and knowledge sharing: blogging, speaking at conferences and helping others in various channels allows you to get to know other people and to be known, with the nice side effect of sometimes being able, with a single tweet, to get help solving problems you may face!

Categories: BI & Warehousing

Making our way into Dremio

Wed, 2018-07-25 04:07

In an analytics system, we typically have an Operational Data Store (ODS) or staging layer; a performance layer or some data marts; and on top, there would be an exploration or reporting tool such as Tableau or Oracle's OBIEE. This architecture can lead to latency in decision making, creating a gap between analysis and action. Data preparation tools like Dremio can address this.

Dremio is a Data-as-a-Service platform allowing users to quickly query data, directly from the source or in any layer, regardless of its size or structure. The product makes use of Apache Arrow, allowing it to virtualise data through an in-memory layer, creating what is called a Data Reflection.

The intent of this post is an introduction to Dremio; it provides a step by step guide on how to query data from Amazon's S3 platform.

I wrote this post using my MacBook Pro; Dremio is supported on macOS. To install it, I needed to make some configuration changes due to the Java version. The latest version of Dremio uses Java 1.8. If you have a more recent Java version installed, you'll need to make some adjustments to the Dremio configuration files.

Let's start by downloading and installing Dremio. Dremio is available for multiple platforms and can be downloaded from here.

Dremio uses Java 1.8, so if you have a different version installed please make sure Java 1.8 is available as well and edit /Applications/Dremio.app/Contents/Java/dremio/conf/dremio-env to point to the directory where the Java 1.8 home is located.

After that you should be able to start Dremio like any other macOS application and access http://localhost:9047

Image Description

Configuring S3 Source

Dremio can connect to relational databases (both commercial and open source), NoSQL, Hadoop, cloud storage, ElasticSearch, among others. However, the scope of this post is to use a well-known cloud storage service, an S3 bucket (more details can be found here), and show the query capabilities of Dremio against unstructured data.

For this demo we're using Garmin CSV activity data that can be easily downloaded from Garmin activity page.

Here is an example of a CSV Garmin activity. If you don't have a Garmin account you can always replicate the data below.

act,runner,Split,Time,Moving Time,Distance,Elevation Gain,Elev Loss,Avg Pace,Avg Moving Paces,Best Pace,Avg Run Cadence,Max Run Cadence,Avg Stride Length,Avg HR,Max HR,Avg Temperature,Calories
1,NMG,1,00:06:08.258,00:06:06.00,1,36,--,0:06:08  ,0:06:06  ,0:04:13  ,175.390625,193.0,92.89507499768523,--,--,--,65
1,NMG,2,00:10:26.907,00:10:09.00,1,129,--,0:10:26  ,0:10:08  ,0:06:02  ,150.140625,236.0,63.74555754497759,--,--,--,55

For user information data we have used the following dataset

runner,dob,name
JM,01-01-1900,Jon Mead
NMG,01-01-1900,Nelio Guimaraes

Add your S3 credentials to access

After configuring your S3 account, all buckets associated with it will appear under the new source area.

For this post I've created two buckets, nmgbuckettest and nmgdremiouser, containing data that could be interpreted as a data mart:

Image Description

nmgbuckettest - contains Garmin activity data that could be seen as a fact table in CSV format :

Act,Runner,Split,Time,Moving Time,Distance,Elevation Gain,Elev Loss,Avg Pace,Avg Moving Paces,Best Pace,Avg Run Cadence,Max Run Cadence,Avg Stride Length,Avg HR,Max HR,Avg Temperature,Calories

nmgdremiouser - contains user data that could be seen as a user dimension in a CSV format:

runner,dob,name

Creating datasets

After we add the S3 buckets we need to set up the CSV format. Dremio does most of the work for us; however, we needed to adjust some fields, for example date formats, or map a field as an integer.

By clicking on the gear icon we access a configuration panel where we can set the following options. Our CSVs were pretty clean, so I've just changed the line delimiter to \n and checked the Extract Field Name option.

Let's do the same for the second set of CSVs (the nmgdremiouser bucket).

Clicking Save will take us to a new panel where we can start performing some queries.

However, as mentioned before, at this stage we might want to adjust some fields. Here I'll adapt the dob field from the nmgdremiouser bucket to be in the dd-mm-yyyy format.

Apply the changes and save the new dataset under the desired space.

Feel free to do the same for the nmgbuckettest CSVs. To make things easier to follow, I'll call the dataset coming from the nmgdremiouser bucket D_USER and the one coming from nmgbuckettest F_ACTIVITY.

Querying datasets

Now that we have the D_USER and F_ACTIVITY datasets created, we can start querying them and doing some analysis.

This first analysis will tell us which runner climbs more during his activities:

SELECT round(nested_0.avg_elev_gain) AS avg_elev_gain, round(nested_0.max_elev_gain) AS max_elev_gain, round(nested_0.sum_elev_gain) as sum_elev_gain, join_D_USER.name AS name
FROM (
  SELECT avg_elev_gain, max_elev_gain, sum_elev_gain, runner
  FROM (
    SELECT AVG(to_number("Elevation Gain",'###')) as avg_elev_gain,
    MAX(to_number("Elevation Gain",'###')) as max_elev_gain,
    SUM(to_number("Elevation Gain",'###')) as sum_elev_gain,
    runner
    FROM dremioblogpost.F_ACTIVITY
    where "Elevation Gain" != '--'
    group by runner
  ) nested_0
) nested_0
 INNER JOIN dremioblogpost.D_USER AS join_D_USER ON nested_0.runner = join_D_USER.runner    

To enrich the example, let's find out who the fastest runner is, with an analysis based on the total climbing:

 SELECT round(nested_0.km_per_hour) AS avg_speed_km_per_hour, nested_0.total_climbing AS total_climbing_in_meters, join_D_USER.name AS name
FROM ( 
  SELECT km_per_hour, total_climbing, runner
  FROM (
    select avg(cast(3600.0/((cast(substr("Avg Moving Paces",3,2) as integer)*60)+cast(substr("Avg Moving Paces",6,2) as integer)) as float)) as km_per_hour,
        sum(cast("Elevation Gain" as integer)) total_climbing,
        runner
        from dremioblogpost.F_ACTIVITY
        where "Avg Moving Paces" != '--'
        and "Elevation Gain" != '--'
        group by runner
  ) nested_0
) nested_0
 INNER JOIN dremioblogpost.D_USER AS join_D_USER ON nested_0.runner = join_D_USER.runner

Conclusions

Dremio is an interesting tool capable of unifying existing repositories of unstructured data. Is Dremio capable of working with any volume of data and complex relationships? Well, I believe that right now the tool isn't capable of this; even with the simple and small data sets used in this example the performance was not great.

Dremio does successfully provide self-service access to most platforms, meaning that users don't have to move data around before being able to perform any analysis. This is probably the most exciting part of Dremio: it might well fit the paradigm of a "good enough" way to access data across multiple sources, allowing data scientists to do analysis before the data is formally structured.

Categories: BI & Warehousing

Oracle Analytics Cloud Workshop FAQ

Mon, 2018-07-23 07:59

A few weeks ago, I had the opportunity to present the Rittman Mead Oracle Analytics Cloud workshop in Oracle's head office in London. The aim of the workshop was to educate potential OAC customers and give them the tools and knowledge to decide whether or not OAC was the right solution for them. We had a great cross section of multiple industries (although telecoms were over represented!) and OBIEE familiarity. Together we came up with a series of questions that needed to be answered to help in the decision making process. In the coming workshops we will add more FAQ style posts to the blog to help flesh out the features of the product.

If you are interested in coming along to one of the workshops to get some hands on time with OAC, send an email to training@rittmanmead.com and we can give you the details.

Do Oracle provide a feature comparison list between OBIEE on premise and OAC?

Oracle do not provide a feature comparison between on-premise and OAC. However, Rittman Mead have done an initial comparison between OAC and traditional on-premise OBIEE 12c installations:

High Level
  • Enterprise Analytics is identical to 12c Analytics
  • Only two Actions available in OAC: Navigate to BI content, Navigate to Web page
  • BI Publisher is identical in 12c and OAC
  • Data Visualiser has additional features and a slightly different UI in OAC compared to 12c
BI Developer Client Tool for OAC
  • Looks exactly the same as the OBIEE client
  • Available only for Windows, straightforward installation
  • OAC IP address and BI Server port must be provided to create an ODBC data source
  • Allows to open and edit online the OAC model
  • Allows offline development. Snapshots interface used to upload it to OAC (it will completely replace existing model)
Data Modeler
  • Alternative tool to create and manage metadata models
  • Very easy to use, but limited compared to the BI Developer Client.
Catalog
  • It's possible to archive/unarchive catalog folders from on-premise to OAC.
BAR file
  • It's possible to create OAC bar files
  • It's possible to migrate OAC bar files to OBIEE 12c

Can you ever be charged for network usage, for example a connection to an on-premise data source using RDC?

Oracle will not charge you for network usage as things stand. Your charges come from the following:

  • Which version of OAC you have (Standard, Data Lake or Enterprise)
  • Whether you are using Pay-as-you-go or Monthly Commitments
  • The amount of disk space you have specified during provisioning
  • The combination of OCPU and RAM currently in use (size).
  • The up-time of your environment.

So, for example, an environment that has 1 OCPU with 7.5 GB RAM will cost less than an environment with 24 OCPUs and 180 GB RAM if they are up for the same amount of time, everything else being equal. That being said, there is an additional charge on top of the analytics licence, as a cloud database is required to configure and launch an analytics instance, which should be taken into consideration when choosing Oracle Analytics Cloud.

Do you need to restart the OAC environment when you change the RAM and OCPU settings?

Configuring the number of OCPUs and associated RAM is done from the Analytics Service Console. This can be done during uptime without a service restart; however, the analytics service will be unavailable:

alt

PaaS Service Manager Command Line Interface (PSM Cli), which Francesco covered here, will allow this to be scripted and scheduled. An interesting use case would be to increase resources during month-end processing, when your concurrent users are at their highest, whilst in the quieter parts of the month you can scale back down.

This is done using the 'scale' command, which takes a JSON file as a parameter containing information about what the environment should look like. You will notice in the example below that the JSON file refers to an object called 'shape'; this is the combination of OCPU and RAM that you want the instance to scale to. Some examples of shapes are:

  • oc3 — 1 OCPU with 7.5 GB RAM
  • oc4 — 2 OCPUs with 15 GB RAM
  • oc5 — 4 OCPUs with 30 GB RAM
  • oc6 — 8 OCPUs with 60 GB RAM
  • oc7 — 16 OCPUs with 120 GB RAM
  • oc8 — 24 OCPUs with 180 GB RAM
  • oc9 — 32 OCPUs with 240 GB RAM

For example:

The following example scales the rittmanmead-analytics-prod service to the oc9 shape.

$ psm analytics scale -s rittmanmead-analytics-prod -c ~/oac-obiee/scale-to-monthend.json
where the JSON file contains the following:

{ "components" : { "BI" : { "shape" : "oc9", "hosts" : ["rittmanmead-prod-1"] } } }

Oracle supply documentation for the commands required here: https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-scale2.html .

How is high availability provisioned in Oracle Analytics Cloud?

Building a highly available infrastructure in the cloud needs to take three main areas into consideration:

Server Failure: Oracle Analytics Cloud can be clustered; additional nodes (up to 10) can be added dynamically in the Cloud 'My Services' console should they need to be:

alt

It is also possible to provision a load balancer, as you can see from the screenshot below:

alt

Zone Failure: Sometimes it is more than just a single server that causes the failure. Cloud architecture is built on server farms, which themselves can be subject to network issues, power failures and weather anomalies. Oracle Analytics Cloud allows you to create an instance in a region, much like Amazon's "availability zones". A sensible precaution would be to create a disaster recovery environment in a different region to your main production environment; to help reduce costs this can be provisioned on the Pay-as-you-go licence model and therefore only be chargeable when it's being used.

Cloud Failure: Although rare, sometimes the cloud platform can fail. For example, both the data centres that you have chosen to counter the previous point could fall victim to a weather anomaly. Oracle Analytics Cloud allows you to take regular backups of your reports, dashboards and metadata, which can be downloaded, stored off-cloud and re-implemented in another 12c environment.

In addition to these points, it's advisable to automate and test everything. Oracle supply a very handy set of scripts and an API called the PaaS Service Manager Command Line Interface (PSM Cli) which can be used to achieve this. For example, it can be used to automate backups, set up monitoring and alerting and, finally and arguably most importantly, to test your DR and HA infrastructure.

Can you push the user credentials down to the database?

At this point in time there is no way to configure database authentication providers in a similar way to the WebLogic providers of the past. However, Oracle IDCS does have a REST API that could be used to simulate this functionality; documentation can be found here: https://docs.oracle.com/en/cloud/paas/identity-cloud/rest-api/OATOAuthClientWebApp.html

You can store user group memberships in a database and have your service's authentication provider access this information when authenticating a user's identity. You can use the script configure_bi_sql_group_provider to set up the provider and create the tables that you need (GROUPS and GROUPMEMBERS). After you run the script, you must populate the tables with your group and group member (user) information.

Group memberships that you derive from the SQL provider don't show up in the Users and Roles page in Oracle Analytics Cloud Console as you might expect but the member assignments work correctly.

alt

These tables are in the Oracle Database Cloud Service you configured for Oracle Analytics Cloud and in the schema created for your service. Unlike the on-premises equivalent functionality, you can’t change the location of these tables or the SQL that retrieves the results.
The script to achieve this is stored on the analytics server itself and can be accessed over SSH (as the user 'opc') with the private keys that you created during the instance provisioning process. It is stored in: /bi/app/public/bin/configure_bi_sql_group_provider
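
As an illustration, populating the tables is then plain SQL against the service schema. The column names below are assumptions for the sake of the example; check the DDL generated by the script on your own instance before inserting:

-- Hypothetical sketch: verify the real column names of the tables created by the script
INSERT INTO GROUPS (G_NAME) VALUES ('FINANCE_ANALYSTS');
INSERT INTO GROUPMEMBERS (G_NAME, G_MEMBER) VALUES ('FINANCE_ANALYSTS', 'jsmith@example.com');
COMMIT;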

Can you implement SSL certificates in Oracle Analytics Cloud?

The short answer is yes.

When Oracle Analytics Cloud instances are created, similarly to on-premise OBIEE instances, a self-signed certificate is generated. The self-signed certificate is intended to be temporary and you must replace it with a new private key and a certificate signed by a certification authority. Doc ID 2334800.1 on support.oracle.com has the full details on how to implement this, but the high level steps (taken from the document itself) are:

  • Associate a custom domain name with the public IP of your OAC instance
  • Get the custom SSL certificate from a Certificate Authority
  • Specify the DNS registered host name that you want to secure with SSL in servername.conf
  • Install the intermediate certificate
  • Run the script to register the new private key and server certificate

Can you implement Single Sign On (SSO) in Oracle Analytics Cloud?

Oracle Identity Cloud Service (IDCS) allows administrators to create security providers for OAC, much like the WebLogic providers in on-premise OBIEE. These can be created/edited to include single sign-on URLs, certificates etc., as shown in the screenshot below:

alt

Oracle Support Doc ID 2399789.1 covers this in detail for Microsoft Azure AD and OAC, and is well worth a read.

Are RPD files (BAR files) backwards compatible?

This would depend on what has changed between the releases. Different version numbers of OAC don't necessarily include changes to the OBIEE components themselves (e.g. it could just be an improvement to the 'My Services' UI). However, if there have been changes to the way the XML is formed in reports, for example, these won't be compatible with previous versions of the catalog. That being said, environments look like they can be upgraded at any time, so you should be able to take a snapshot of your environment, upgrade it to match the newer version and then redeploy/refresh from your snapshot.

How do you connect securely to AWS?

There doesn't seem to be any documentation on how exactly Visual Analyzer connects to Amazon Redshift using the 'Create Connection' wizard. However, there is an option to create an SSL ODBC connection to the Redshift database that can then be used to connect using the Visual Analyzer ODBC connection wizard:

alt

Can you still edit instanceconfig and nqsconfig files?

Yes you can; you need to use your SSH keys to sign into the box (as the user 'opc'). The files are in the following locations:

/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml

/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI

It's also worth mentioning that there is a guide here which explains where the responsibility lies should anything break during customisations of the platform.

Who is responsible for what regarding support?

Guide to Customer vs Oracle Management Responsibilities in Oracle Infrastructure and Platform Cloud Services (Doc ID 2309936.1)

Categories: BI & Warehousing

Oracle Analytics Cloud Workshop FAQ

Mon, 2018-07-23 07:59

A few weeks ago, I had the opportunity to present the Rittman Mead Oracle Analytics Cloud workshop in Oracle's head office in London. The aim of the workshop was to educate potential OAC customers and give them the tools and knowledge to decide whether or not OAC was the right solution for them. We had a great cross section of multiple industries (although telecoms were over represented!) and OBIEE familiarity. Together we came up with a series of questions that needed to be answered to help in the decision making process. In the coming workshops we will add more FAQ style posts to the blog to help flesh out the features of the product.

If you are interested in coming along to one of the workshops to get some hands on time with OAC, send an email to training@rittmanmead.com and we can give you the details.

Do Oracle provide a feature comparison list between OBIEE on premise and OAC?

Oracle do not provide a feature comparison between on-premise and OAC. However, Rittman Mead have done an initial comparison between OAC and traditional on-premise OBIEE 12c installations:

High Level
  • Enterprise Analytics is identical to 12c Analytics
  • Only two Actions available in OAC: Navigate to BI content, Navigate to Web page
  • BI Publisher is identical in 12c and OAC
  • Data Visualiser has additional features and a slightly different UI in OAC compared to 12c
BI Developer Client Tool for OAC
  • Looks exactly the same as the OBIEE client
  • Available only for Windows, straightforward installation
  • OAC IP address and BI Server port must be provided to create an ODBC data source
  • Allows to open and edit online the OAC model
  • Allows offline development. Snapshots interface used to upload it to OAC (it will completely replace existing model)
Data Modeler
  • Alternative tool to create and manage metadata models
  • Very easy to use, but limited compared to the BI Developer Client.
Catalog
  • It's possible to archive/unarchive catalog folders from on-premise to OAC.
BAR file
  • It's possible to create OAC bar files
  • It's possible to migrate OAC bar files to OBIEE 12c
Can you ever be charged by network usage, for example connection to an on premise data source using RDC?

Oracle will not charge you for network usage as things stand. Your charges come from the following:

  • Which version of OAC you have (Standard, Data Lake or Enterprise)
  • Whether you are using Pay-as-you-go or Monthly Commitments
  • The amount of disk space you have specified during provisioning
  • The combination of OCPU and RAM currently in use (size).
  • The up-time of your environment.

So for example an environment that has 1 OCPU with 7.5 GB RAM will cost less than an environment with 24 OCPUs with 180 GB RAM if they are up for the same amount of time, everything else being equal. This being said, there is an additional charge to the analytics license as a cloud database is required to configure and launch an analytics instance which should be taken into consideration when choosing Oracle Analytics Cloud.

Do you need to restart the OAC environment when you change the RAM and OCPU settings?

Configuring the number of OCPUs and associated RAM is done from the Analytics Service Console. This can be done during up time without a service restart, however the analytics service will be unavailable:

alt

PaaS Service Manager Command Line Interface (PSM Cli), which Francesco covered here, will allow this to be scripted and scheduled. An interesting use case for this would be to allow an increase in resources during month end processing where your concurrent users are at its highest, whilst in the quieter parts of the month you can scale back down.

This is done using the 'scale' command, this command takes a json file as a parameter which contains information about what the environment should look like. You will notice in the example below that the json file refers to an object called 'shape'; this is the combination of OCPU and RAM that you want the instance to scale to. Some examples of shapes are:

  • oc3 — 1 OCPU with 7.5 GB RAM
  • oc4 — 2 OCPUs with 15 GB RAM
  • oc5 — 4 OCPUs with 30 GB RAM
  • oc6 — 8 OCPUs with 60 GB RAM
  • oc7 — 16 OCPUs with 120 GB RAM
  • oc8 — 24 OCPUs with 180 GB RAM
  • oc9 — 32 OCPUs with 240 GB RAM

For example:

The following example scales the rittmanmead-analytics-prod service to the oc9 shape.

$ psm analytics scale -s rittmanmead-analytics-prod -c ~/oac-obiee/scale-to-monthend.json where the JSON file contains the following:

{ "components" : { "BI" : "shape" : "oc9", "hosts":["rittmanmead-prod-1"] } } }

Oracle supply documentation for the commands required here: https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-scale2.html .

How is high availability provisioned in Oracle Analytics Cloud?

Building a highly available infrastructure in the cloud needs to take three main areas into consideration:

Server Failure: Oracle Analytics Cloud can be clustered; additional nodes (up to 10) can be added dynamically in the Cloud 'My Services' console should they be needed:

It is also possible to provision a load balancer, as you can see from the screenshot below: 

Zone Failure: Sometimes it is more than just a single server that causes the failure. Cloud architecture is built on server farms, which themselves can be affected by network issues, power failures and weather anomalies. Oracle Analytics Cloud allows you to create an instance in a region, much like Amazon's availability zones. A sensible precaution would be to create a disaster recovery environment in a different region from your main production environment; to help reduce costs this can be provisioned on the Pay-as-you-go licence model and therefore only be chargeable when it is being used.

Cloud Failure: Although rare, sometimes the cloud platform itself can fail. For example, both of the data centres you have chosen to address the previous point could fall victim to a weather anomaly. Oracle Analytics Cloud allows you to take regular backups of your reports, dashboards and metadata, which can be downloaded, stored off-cloud and re-implemented in another 12c environment.

In addition to these points, it's advisable to automate and test everything. Oracle supply a very handy set of scripts and APIs called the PaaS Service Manager Command Line Interface (PSM Cli) which can be used to achieve this. For example, it can be used to automate backups and to set up monitoring and alerting; most importantly, it can be used to test your DR and HA infrastructure.

Can you push the user credentials down to the database?

At this point in time there is no way to configure database authentication providers in a similar way to the WebLogic providers of the past. However, Oracle IDCS does have a REST API that could be used to simulate this functionality; documentation can be found here: https://docs.oracle.com/en/cloud/paas/identity-cloud/rest-api/OATOAuthClientWebApp.html

You can store user group memberships in a database and allow your service's authentication provider to access this information when authenticating a user's identity. You can use the script configure_bi_sql_group_provider to set up the provider and create the tables that you need (GROUPS and GROUPMEMBERS). After you run the script, you must populate the tables with your group and group member (user) information (a purely illustrative sketch follows at the end of this answer).

Group memberships that you derive from the SQL provider don't show up in the Users and Roles page in the Oracle Analytics Cloud Console as you might expect, but the member assignments work correctly.

These tables are in the Oracle Database Cloud Service you configured for Oracle Analytics Cloud and in the schema created for your service. Unlike the on-premises equivalent functionality, you can’t change the location of these tables or the SQL that retrieves the results.
The script to achieve this is stored on the analytics server itself and can be accessed over SSH (as the user 'opc') using the private keys that you created during the instance provisioning process. It is located at: /bi/app/public/bin/configure_bi_sql_group_provider
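
As a purely illustrative sketch of the "populate the tables" step mentioned above, the snippet below inserts one group and one membership row via SQL*Plus. The column names (GROUP_NAME, MEMBER_NAME), the user names and the connection string are assumptions for illustration only; check the table definitions that configure_bi_sql_group_provider actually creates in your schema before running anything similar.

# Hypothetical example only - column names and connection details are assumptions.
sqlplus <OAC_SCHEMA>/<PASSWORD>@<DBCS_CONNECT_STRING> <<'SQL'
-- one row per group
INSERT INTO GROUPS (GROUP_NAME) VALUES ('BIContentAuthors');
-- one row per user-to-group assignment
INSERT INTO GROUPMEMBERS (GROUP_NAME, MEMBER_NAME) VALUES ('BIContentAuthors', 'jsmith');
COMMIT;
SQL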

Can you implement SSL certificates in Oracle Analytics Cloud?

The short answer is yes.

When Oracle Analytics Cloud instances are created, similarly to on-premise OBIEE instances, a self-signed certificate is generated. The self-signed certificate is intended to be temporary and you must replace it with a new private key and a certificate signed by a certification authority. Doc ID 2334800.1 on support.oracle.com has the full details on how to implement this, but the high-level steps (taken from the document itself) are:

  • Associate a custom domain name against the public ip of your OAC instance
  • Get the custom SSL certificate from a Certificate Authority
  • Specify the DNS registered host name that you want to secure with SSL in servername.conf
  • Install the intermediate certificate
  • Run the script to register the new private key and server certificate
Can you implement Single Sign On (SSO) in Oracle Analytics Cloud?

Oracle Identity Cloud Service (IDCS) allows administrators to create security providers for OAC, much like the WebLogic providers in on-premise OBIEE. These can be created/edited to include single sign-on URLs, certificates etc., as shown in the screenshot below:

Oracle support Doc ID 2399789.1 covers this in detail for Microsoft Azure AD and OAC, and is well worth a read.

Are RPD files (BAR files) backwards compatible?

This would depend on what has changed between the releases. Different OAC version numbers don't necessarily include changes to the OBIEE components themselves (e.g. it could just be an improvement to the 'My Services' UI). However, if there have been changes to the way the XML is formed in reports, for example, these won't be compatible with previous versions of the catalog. That said, environments look like they can be upgraded at any time, so you should be able to take a snapshot of your environment, upgrade it to match the newer version and then redeploy/refresh from your snapshot.

How do you connect securely to AWS?

There doesn't seem to be any documentation on how exactly Visual Analyzer connects to Amazon Redshift using the 'Create Connection' wizard. However, there is an option to create an SSL ODBC connection to the Redshift database that can then be used to connect using the Visual Analyzer ODBC connection wizard:

Can you still edit instanceconfig and nqsconfig files?

Yes you can; you need to use your SSH keys to sign into the box (as the user 'opc'). The files are contained in the following locations, and a short connection sketch follows the paths below:

/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml

/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI
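
A minimal connection sketch is shown below; the private key path and IP address are placeholders, and depending on file ownership you may need sudo (or to switch to the oracle user) before editing.

# Connect to the OAC node as 'opc' using the private key created at provisioning time
# (key path and public IP are placeholders):
ssh -i ~/.ssh/oac_private_key opc@<OAC_PUBLIC_IP>

# Then edit the files listed above; sudo may be required depending on file ownership:
sudo vi /bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml
sudo vi /bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI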

It's also worth mentioning that there is a guide here which explains where the responsibility lies should anything break during customisations of the platform.

Who is responsible for what regarding support?

Guide to Customer vs Oracle Management Responsibilities in Oracle Infrastructure and Platform Cloud Services (Doc ID 2309936.1)

Categories: BI & Warehousing

DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli

Mon, 2018-06-25 10:18

This summer we unselfish Italians decided not to participate in the World Cup, to give another country the opportunity to win (good luck with that, England!). This decision, which I strongly support, gives me a lot of time for blogging!

As already written, two weeks ago while in Orlando for Kscope18, I presented a session about DevOps and OBIEE, focusing on how to properly source-control, promote and regression-test any component of the infrastructure.

Development Isolation

One key aspect of DevOps is providing Development Isolation: a way of allowing multiple development streams to work independently and of merging the outcome of the process into the main environment only after it has been tested and validated. This is needed to avoid the standard situation where code promotions are blocked because different work streams are not in sync: forcing a team to postpone a code release just because another team doesn't have the UAT OK is a typical example of non-isolated development platforms.

We have discussed the development isolation topic in the past, focusing mainly on concurrent repository development and how to integrate it with versioning tools like Git and SVN. The concurrent online editing option is not viable, since multiple developers are modifying the same artifact (the RPD) without a way of regression-testing the changes or verifying that what has been done is correct before merging the changes into the RPD.
Alternative solutions, such as using MUDE (the default multi-user development method provided by the Admin Tool) or pure offline RPD work, encounter the same problems defined above: no feature or regression testing is available before merging the RPD into the main development environment.

Different RPD development techniques solve the problem only partially: since almost any OAC/OBIEE development consists of both RPD and catalog work (creation of analyses/dashboards/VA projects), we need an approach which provides Development Isolation at both levels. The solution, in order to properly build a DevOps framework around OAC/OBIEE, is to provide isolated feature-related full OBIEE instances where the RPD can be edited in online mode, the catalog work can be done independently, and the overall result can be tested and validated before being merged into the common development environment.

Feature-Related Instances

The feature instances, as described above, need to be full OAC/OBIEE development instances where only one feature (or a small set of features) is worked on at a time, giving developers the agility to release the code as soon as it's ready and tested. In the on-premises world this can "easily" be achieved by providing a number of dedicated Virtual Machines or, more in line with recent trends, automated instance provisioning with Docker using a template image like the one built by our former colleague Gianni Ceresa.

However, when we think about Oracle Analytics Cloud (OAC), we seem to have two problems:

  • There is a cost associated with every instance, thus minimizing the number of instances and the uptime is necessary
  • The OAC provisioning interface is point and click, thus automating the instance management seems impossible

The overall OAC instance cost can be mitigated by the Bring Your Own License (BYOL) licensing method, which allows customers to migrate on-premises licenses to the cloud and get discounted prices on the hourly/monthly instance cost (more details here). However, since the target is to minimize the cost, and thus the number of instances and their uptime, we need a way to do so that doesn't rely on a human and a point-and-click interface. Luckily the PaaS Service Manager Command Line Interface (PSM Cli) allows us to solve this problem by providing a scriptable way of creating, starting and stopping instances.

PaaS Service Manager Command Line Interface

PSMCLI is a command line interface acting as a wrapper over the PaaS REST APIs. Its usage is not limited to OAC: the same interface can be used to create and manage instances of Oracle's Database Cloud Service or Java Cloud Service, amongst others.
When talking about OAC, please keep in mind that, as of now, PSM Cli works only with the non-autonomous version, but I believe Autonomous support will be added soon.

Installing and Configuring PSM Cli

PSMCLI has two prerequisites before it can be installed:

  • cURL - a command line utility to transfer data with URLs
  • Python 3.3 or later

Once both prerequisites are installed PSM can easily be downloaded with the following cURL call

curl -X GET -u <USER>:<PWD> -H X-ID-TENANT-NAME:<IDENTITY_DOMAIN> https://<REST_SERVER>/paas/core/api/v1.1/cli/<IDENTITY_DOMAIN>/client -o psmcli.zip

Where

  • <USER> and <PWD> are the credentials
  • <IDENTITY_DOMAIN> is the Identity Domain ID specified during the account creation
  • <REST_SERVER> is the REST API server name which is:
    • psm.us.oraclecloud.com if you are using a US datacenter
    • psm.aucom.oraclecloud.com if you are in the AuCom region
    • psm.europe.oraclecloud.com otherwise

Next step is to install PSM as a Python package with

pip3 install -U psmcli.zip

After the installation it is time for configuration

psm setup

The configuration command will request the following information:

  • Oracle Cloud Username and Password
  • Identity Domain
  • Region, which needs to be set to:
    • emea if the REST_SERVER mentioned above contains emea
    • aucom if the REST_SERVER mentioned above contains aucom
    • us otherwise
  • Output format: the choice is between short, json and html
  • OAuth: the communication between the CLI and the REST API can use basic authentication (flag n) or OAuth (flag y). If OAuth is chosen then ClientID, Secret and Access Token need to be specified

A JSON profile file can also be used to provide the same information mentioned above. The structure of the file is the following

{ 
    "username":"<USER>",
    "password":"<PASSWORD>",
    "identityDomain":"<IDENTITY_DOMAIN>",
    "region":"<REGION>",
    "outputFormat":"<OUTPUT_FORMAT>",
    "oAuth":{ 
        "clientId":"",
        "clientSecret":"",
        "accessTokenServer":""
    }
}

If the profile is stored in a file profile.json the PSM configuration can be achieved by just executing

psm setup -c profile.json

One quick note: the identity domain Id shown in the Oracle Cloud header won't work if it isn't the original name (the name at the time of creation).

In order to get the correct identity domain Id to use, open an Oracle Cloud instance you have already created (e.g. a database one) and check its Details: you'll see the original identity domain listed there (credits to Pieter Van Puymbroeck).

Working With PSM Cli

Once PSM has been correctly configured it's time to start checking what options are available; for a detailed list of the options check the PSM documentation.

The PSM commands are product related, so each command is in the form:

psm <product> <command> <parameters>

Where

  • product is the Oracle cloud product e.g. dbcs, analytics, BigDataAppliance, for a complete list use psm help
  • command is the action to be executed against the product e.g. services, stop, start, create-service
  • parameters is the list of parameters to pass depending on the command executed

The first step is to check what instances I have already created; I can do so for the database by executing

psm dbcs services

which, as expected, will list all my active instances

I can then start and stop it using:

psm dbcs start/stop/restart -s <INSTANCE_NAME>

Which in my example provides the Id of the Job assigned to the stop operation.

When I check the status via the service command I get Maintenance like in the web UI.

The same applies to the start and restart operations. Please keep in mind that all the calls are asynchronous: the command calls the related REST API and then returns the associated Job ID without waiting for the operation to finish. The status of a job can be checked with the command below, and a minimal polling sketch follows it:

psm dbcs operation-status -j <JOB_ID>
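
Since the calls are asynchronous, a script chaining these commands needs to wait for the Job to finish before moving on. Below is a minimal polling sketch; the assumption that an in-progress job reports a status containing "running" should be verified against the operation-status output in your own tenancy.

#!/bin/bash
# wait_for_job.sh - minimal sketch: poll a PSM job until it stops reporting as running,
# so that start/stop/scale calls can be chained in a script.
# Assumption: in-progress jobs include the word "running" in the operation-status output.
JOB_ID="$1"

while true; do
  OUTPUT=$(psm dbcs operation-status -j "$JOB_ID")
  echo "$OUTPUT"
  if ! echo "$OUTPUT" | grep -qi "running"; then
    break    # the job has finished (either succeeded or failed)
  fi
  sleep 60
done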

The same operations described above are available on OAC with the same commands by simply changing the product from dbcs to analytics like:

psm analytics start/stop/restart -s <INSTANCE_NAME>

On top of the basic operations, PSM Cli also allows the following:

  • Service Instance: start/stop/restart, instance creation-deletion
  • Access Control: lists, creates, deletes, enables and disables access rules for a service.
  • Scaling: changes the compute shape of an instance and allows scaling up/down.
  • Storage: extends the storage associated to OAC
  • Backup Configuration: updates/shows the backup configurations
  • Backups: lists, creates, deletes backups of the instance
  • Restore: restores a backup giving detailed information about it and the history of Restores
  • Patches: allows patching, rolling back, running pre-checks, and retrieving patching history
Creating an OAC Instance

So far we have discussed maintenance of already created instances with the start/stop/restart commands, but PSM Cli also allows the creation of an instance via the command line. The call is pretty simple:

psm analytics create-service -c <CONFIG_FILE> -of <OUTPUT_FORMAT>

Where

  • CONFIG_FILE: is the file defining all OAC instance configurations
  • OUTPUT_FORMAT: is the desired output format between short, json and html

The question now is:

How do I create a Config File for OAC?

The documentation doesn't provide any help on this, but we can use the same approach as for on-premises OBIEE and its response files: create the first instance with the Web-UI, save the payload for future use and change the parameters when necessary.

On the Confirm screen, there is the option to Download the REST payload in JSON format

With the resulting json Config File being

{
  "edition": "<EDITION>",
  "vmPublicKeyText": "<SSH_TOKEN>",
  "enableNotification": "true",
  "notificationEmail": "<EMAIL>",
  "serviceVersion": "<VERSION>",
  "isBYOL": "false",
  "components": {
    "BI": {
      "adminUserPassword": "<ADMINPWD>",
      "adminUserName": "<ADMINUSER>",
      "analyticsStoragePassword": "<PWD>",
      "shape": "oc3",
      "createAnalyticsStorageContainer": "true",
      "profile_essbase": "false",
      "dbcsPassword": "<DBCSPWD>",
      "totalAnalyticsStorage": "280.0",
      "profile_bi": "true",
      "profile_dv_forced": "true",
      "analyticsStorageUser": "<EMAIL>",
      "dbcsUserName": "<DBUSER>",
      "dbcsPDBName": "<PDBNAME>",
      "dbcsName": "<DBCSNAME>",
      "idcs_enabled": "false",
      "analyticsStorageContainerURL": "<STORAGEURL>",
      "publicStorageEnabled": "false",
      "usableAnalyticsStorage": "180"
    }
  },
  "serviceLevel": "PAAS",
  "meteringFrequency": "HOURLY",
  "subscriptionId": "<SUBSCRIPTIONID>",
  "serviceName": "<SERVICENAME>"
}

This file can be stored and the parameters changed as necessary to create new OAC instances with the command:

psm analytics create-service -c <JSON_PAYLOAD_FILE> -of short/json/html

As shown previously, the result of the call is a Job Id that can be monitored with

psm analytics operation-status -j <JOB_ID>

Once the Job has finished successfully, the OAC instance is ready to be used. If at a certain point the OAC instance is no longer needed, it can be deleted via:

psm analytics delete-service -s <SERVICE_NAME> -n <DBA_NAME> -p <DBA_PWD>

Where

  • SERVICE_NAME is the OAC instance name
  • DBA_NAME and DBA_PWD are the DBA credentials where OAC schemas are residing
Summary

Worried about providing development isolation in OAC while keeping the costs down? Not anymore! With PSM Cli you now have a way of creating instances on demand and of starting, stopping and scaling them up or down with a command line tool that is easily integrated with automation tools like Jenkins.

Create OAC instances automatically only when features need to be developed or tested, stop and start the instances based on your workforce's timetable, and take the benefit of the cloud while minimising its associated cost by using PSM Cli! A minimal lifecycle sketch is shown below.
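
As a closing illustration, here is a minimal sketch of that on-demand lifecycle when scripted (for example from a Jenkins job). The service name and payload path are illustrative, and the job-polling step between calls is omitted for brevity.

#!/bin/bash
# Illustrative feature-instance lifecycle - names and paths are examples only.
PAYLOAD=~/oac-payloads/feature-xyz.json      # saved from the "Download REST payload" step
SERVICE="feature-xyz-oac"                    # must match the serviceName inside the payload

# 1. Create the feature instance when development on the feature starts
psm analytics create-service -c "$PAYLOAD" -of json

# 2. Stop it outside working hours and start it again in the morning to minimise uptime cost
psm analytics stop  -s "$SERVICE"
psm analytics start -s "$SERVICE"

# 3. Delete it once the feature has been merged and tested
psm analytics delete-service -s "$SERVICE" -n <DBA_NAME> -p <DBA_PWD>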

One last note: for a full DevOps OAC implementation, PSM Cli is not sufficient. Tasks like automated regression testing, code versioning and promotion can't be managed directly with PSM Cli, but require the use of external toolsets like the Rittman Mead BI Developer Toolkit. If you are interested in a full DevOps implementation on OAC and in understanding how PSM Cli can be used in conjunction with the Rittman Mead BI Developer Toolkit, don't hesitate to contact us!

Categories: BI & Warehousing

Pages