OrientDB May 2017 Latest News – Case studies, releases, partnerships and more

In case you missed some of the latest news and haven’t subscribed to our newsletter below, here’s a quick recap of this month’s news. Find out how OrientDB can help your startup company with our latest case study and how companies across the world are using graph technology to secure their systems. Stay up to date with our latest stable release or test the new features in our 3.0 milestone release.

Case Studies

If you want to learn how OrientDB can help your startup company, take a look at our latest case study. Find out how New.sc uses graphs and multi-model features to power an intuitive and increasingly popular news platform.

OrientDB Releases

Earlier this month we released OrientDB 2.2.20. This is our latest stable release, so if you haven’t upgraded yet, go ahead and download it now. In case you missed it, last month we released our OrientDB 3.0 Milestones Edition. Though not yet suitable for production environments, it lets you test the latest features included in our upcoming 3.0 release; head over to our Labs page.


Partnerships

This May, OrientDB announced its partnership with MiMe, a Chinese system integrator and fraud-detection specialist. Using multi-model databases, MiMe is helping companies across China move from antiquated relational systems to modern, innovative database systems.

Transforming Relational Data

With the release of OrientDB Teleporter last year, OrientDB is being used around the world to synchronise relational data. In fact, we’re the first NoSQL database to offer this capability. Whether your data comes from Oracle, SQL Server, MySQL, PostgreSQL or HyperSQL, Teleporter transforms tables into graphs and allows relational and NoSQL technologies to coexist.

Stay tuned for more news,

The OrientDB Team

OrientDB Community Awards



We value and appreciate the hard work put in by the worldwide OrientDB community. That’s why, as a small token of appreciation, we’ve started sending out gadgets and rewards to our community members.


Code Contributors

Saeed Tabrizi

A special thank you to Saeed for his dedication to OrientDB. Among his numerous valuable contributions, a noteworthy example is a series of pull requests on the OrientJS repository that, among several improvements, added the IF NOT EXISTS clause when creating classes and properties and the IF EXISTS clause when dropping them.

Michael Pollmeier

Michael is the original author of the Apache TinkerPop 3 graph structure implementation for OrientDB, which will be officially supported in upcoming major OrientDB releases!


Community Contributor

Scott Molinari

Not only has Scott provided detailed bug reports and documentation, he has also helped countless community members, shedding light on new features and assisting others experiencing issues.


Thank You for Your Contributions

Thank you to Saeed, Michael and Scott, who, as a gesture of appreciation, will each receive a Raspberry Pi 3® Starter Kit along with some OrientDB merchandise (T-shirt, stickers and the like)**.


Next time – Bloggers and Writers

We’d also like to send out a special thank you to all the community members writing about OrientDB in their blogs, articles and papers. That’s why, next time around, we’ll be sending out more gadgets to our top community bloggers.

So if you’re currently writing about @OrientDB, remember to use the #OrientDB and #Multimodel tags in your posts and check back on this page regularly. You might find your name on our Top Contributors list!

*All trademarks are the property of their respective owners.
**All OrientDB Community Award winners will be contacted individually in order to receive their prize.


This post is outdated; please refer to the Spark page.



London, July 8, 2016
By Andrea Iacono

The Spark connector for OrientDB is provided by Metreta and hosted on GitHub at https://github.com/metreta/spark-orientdb-connector. It lets Spark and OrientDB interoperate in two directions: accessing OrientDB data from Spark and writing Spark data back to OrientDB. The connector is also aware of the difference between an OrientDB document database and an OrientDB graph database: documents are exposed as Spark RDDs, while graphs are exposed through Spark’s GraphX API.

To compile the connector, clone the master branch and update its build.sbt file with the Scala version and the Spark version you’re using, then run the package command with sbt:

sbt package


After performing these steps, you should find a jar file containing the compiled connector in your target directory. Be sure to create the test database as well (as shown on the connector’s page).
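For reference, the connector-side settings you need to align typically look like the build.sbt excerpt below. The exact setting names and versions in the repository may differ, so treat this as an assumed sketch rather than the connector’s actual build file:

// build.sbt (assumed excerpt): match these to the Scala and Spark versions you run
scalaVersion := "2.11.4"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"   % "1.6.1" % "provided",
  "org.apache.spark" %% "spark-graphx" % "1.6.1" % "provided"
)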

The first step in creating our sample project is to write a build.sbt file, where we define the library dependencies:

libraryDependencies ++= Seq(
 "com.orientechnologies" % "orientdb-core" % "2.2.3",
 "com.orientechnologies" % "orientdb-client" % "2.2.3",
 "com.orientechnologies" % "orientdb-graphdb" % "2.2.3",
 "com.orientechnologies" % "orientdb-distributed" % "2.2.3",
 "org.apache.spark" % "spark-core_2.11" % "1.6.1",
 "org.apache.spark" % "spark-graphx_2.11" % "1.6.1",
 "org.scala-lang" % "scala-compiler" % "2.11.4",
 "org.scala-lang" % "scala-library" % "2.11.4",
 "org.scala-lang" % "scala-reflect" % "2.11.4",
 "jline" % "jline" % "2.12",
 "com.tinkerpop.blueprints" % "blueprints-core" % "2.6.0",
 "com.fasterxml.jackson.core" % "jackson-databind" % "2.7.4",
 "com.fasterxml.jackson.module" % "jackson-module-scala_2.11" % "2.7.4"


We must then configure Spark to connect to OrientDB, which we can do by defining the SparkConf in the following way:

  val conf = new SparkConf()
    .set("spark.orientdb.clustermode", "remote")
    .set("spark.orientdb.connection.nodes", "")
    .set("spark.orientdb.protocol", "remote")
    .set("spark.orientdb.dbname", "test")
    .set("spark.orientdb.port", "2424")
    .set("spark.orientdb.user", "admin")
    .set("spark.orientdb.password", "admin")


We can now share data between Spark and OrientDB.
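Before calling the connector’s methods, we need a SparkContext built from this configuration, plus the connector’s implicit conversions in scope. The sketch below shows one way to set this up; the connector import path and the simple Person class are assumptions for illustration, not part of the connector’s documented API:

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.graphx.{Edge, Graph, VertexId}
// Assumed import: the connector adds orientQuery(), saveToOrient(), orientGraph()
// and related methods to SparkContext and RDD through implicit conversions.
import com.metreta.spark.orientdb.connector._

// Minimal value class used by the examples below (an assumption for illustration,
// not a type defined by the connector).
class Person(val name: String, val surname: String) extends Serializable

// 'conf' is the SparkConf defined above; give the application a name and a master before use.
val sc = new SparkContext(conf.setAppName("orientdb-spark-demo").setMaster("local[*]"))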

Orient Documents to/from Spark RDDs
Let’s start reading some OrientDB documents as a Spark RDD:

var peopleRdd: RDD[OrientDocument] = sc.orientQuery("Person")


The orientQuery() method reads the documents of a class from OrientDB and exposes them as a Spark RDD, on which we can perform the usual manipulations. We can then save them back to OrientDB:

peopleRdd
 .filter(person => person.getString("name") == "John")
 .map(person => new Person("Foo", "Bar"))
 .saveToOrient("Person")


In this example, after a bit of manipulation, the saveToOrient() method saves all the elements of the RDD as OrientDB documents. We can verify the result either by querying OrientDB via Studio or by querying from code:

sc.orientQuery("Person").foreach(p => println(s"Person: ${p.getString("surname")}, ${p.getString("name")}"))


We can also update OrientDB documents using the upsertToOrient() method, as shown in this example, where we update a document property via the RDD and save the documents back to OrientDB:

peopleRdd
 .filter(person => !person.getString("surname").startsWith("New"))
 .map(person => new Person(person.getString("name"), "New " + person.getString("surname")))
 .upsertToOrient("Person")


Orient Graphs to/from Spark GraphX
When we deal with graphs, RDDs are not enough, so we must move to Spark’s API for graph computing: GraphX.

To access OrientDB vertices and edges, we must use the orientGraph() method as shown in this example:

val peopleGraph: Graph[OrientDocument, OrientDocument] = sc.orientGraph()


Since peopleGraph is an org.apache.spark.graphx.Graph object, we can use its methods to access OrientDB data, as in these examples:

val people: VertexRDD[OrientDocument] = peopleGraph.vertices
val relationships: EdgeRDD[OrientDocument] = peopleGraph.edges

println(s"The graph contains ${people.count()} vertices and ${relationships.count()} edges.\n")


We can also access triplets, as in this example where we print friendships among people:

peopleGraph.triplets
 .foreach(triplet => {
   val srcPerson: OrientDocument = triplet.srcAttr
   val dstPerson: OrientDocument = triplet.dstAttr
   println(s"Person: ${srcPerson.getString("surname")}, ${srcPerson.getString("name")} [${triplet.srcId}]. Friend: ${dstPerson.getString("surname")}, ${dstPerson.getString("name")} [${triplet.dstId}]")
 })


The built-in graph algorithms supplied by GraphX are also available, like the triangleCount() used here to show the triangles among people:

val triangles = peopleGraph.triangleCount()

// prints how many triangles each vertex participates in
triangles.vertices
 .foreach {
   case (vertexId, trianglesNumber) => println(s"Person [${vertexId}] participates in ${trianglesNumber} triangles.")
 }


When we have a GraphX graph that we want to save as an OrientDB graph, we can use the saveGraphToOrient() method:

val gr: Graph[Person, String] = createSampleGraph(sc)
// save the GraphX graph to OrientDB; the exact receiver of saveGraphToOrient() is assumed here
gr.saveGraphToOrient()


In this example, the createSampleGraph() method creates RDDs for three vertices and five edges and then builds the graph from them:

def createSampleGraph(sparkContext: SparkContext): Graph[Person, String] = {

  // build the vertex RDD: (vertexId, Person) pairs
  val people: RDD[(VertexId, Person)] =
    sparkContext.parallelize(Seq(
      (1L, new Person("Alice", "Anderson")),
      (2L, new Person("Bob", "Brown")),
      (3L, new Person("Carol", "Clark"))
    ))

  // build the edge RDD: friendship relations between the vertices above
  val edges: RDD[Edge[String]] =
    sparkContext.parallelize(Seq(
      Edge(1L, 2L, "Friendship"),
      Edge(1L, 3L, "Friendship"),
      Edge(2L, 1L, "Friendship"),
      Edge(3L, 1L, "Friendship"),
      Edge(3L, 2L, "Friendship")
    ))

  Graph(people, edges)
}


The full code for these examples is available on GitHub at https://github.com/andreaiacono/SparkOrientDbConnectorDemo.
