More than 2.5 quintillion bytes of data—as much as 250,000 times the printed material in the U.S. Library of Congress—come into existence every day. What this data means for the average enterprise is opportunity: the opportunity to improve fraud protection, compliance and personalization of services and products.
But first, you need to make sure you are working with the right data and that your data is consistent and clean.
While data governance itself is not a new concept, the need for significantly better data governance has grown with the volume, variety and velocity of data. With this need for better data governance has come a need for better databases. Before we get into that, let’s make sure we’re clear on what data governance is and how it’s used.
Data governance is the establishment of processes around data availability, usability, consistency, integrity and security, all of which fall into the three pillars of data governance.
In an age when data silos run rampant and “bad data” is blamed for nearly every major strategic oversight at an enterprise, it’s critical to have someone or something at the ready to ensure business users have high-quality, consistent and easily accessible data.
Enter data stewardship and the “data steward.” A data steward ensures common, meaningful data across applications and systems. This is much easier said than done, of course, and quite often the problems with data stewardship arise from a lack of clarity or specificity around the data steward’s function, as there are many ways to approach it (i.e., according to subject area, business function, business process, etc.).
Nevertheless, properly stewarding data has become a key ability for today’s enterprises and is a key aspect of proper data governance at any organization.
Where data governance itself is the policies and procedures around the overall management of usability, availability, integrity and security of data, data quality is the degree to which information consistently meets the expectations and requirements of the people using it to perform their jobs.
The two are, of course, very intertwined, although data quality should be seen as a natural result of good data governance, and one of the most important results that good data governance achieves.
How accurate is the data? How complete? How consistent? How compliant? These are all questions of data quality, and they are often addressed via the third pillar of data governance: master data management.
Master data management, or MDM, is often seen as a first step towards making the data usable and shareable across an organization. Enterprises are increasingly seeking to consolidate environments, applications and data.
MDM is a powerful method for achieving this consolidation via the creation of a single point of reference for all data.
Considering the recent Facebook fiasco with personal data, and with big regulations like the General Data Protection Regulation (GDPR) now in effect, it’s impossible to overstate the importance of data governance.
NoSQL databases were designed with modern IT architectures in mind. They use a more flexible approach that enables increased agility for development teams, which can evolve the data models on the fly to account for shifting application requirements. NoSQL databases are also easily scalable and can handle large volumes of structured, semi-structured and unstructured data.
Graph databases can be implemented as native graphs, while non-native graph databases, which are slower, store data in relational databases or other NoSQL databases (such as Cassandra) and use graph processing engines for data access. Graph databases are well-suited for applications traversing paths between entities or where the relationship between entities and their properties needs to be queried.
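In a graph database's native query language, such traversals are first-class operations. In OrientDB's SQL dialect, for instance, a shortest-path lookup between two records is a one-liner (the record IDs below are purely illustrative):

```sql
-- Return the vertices along the shortest path between two records
-- (#10:3 and #10:7 are illustrative record IDs)
SELECT expand(shortestPath(#10:3, #10:7))
```

The same query against a relational schema would typically require recursive joins whose cost grows with the depth of the path.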
This relationship-analysis capability makes them ideal for empowering solid data governance at organizations of all types and sizes. From fraud protection to compliance to getting a complete view of the customer, a NoSQL graph database makes data governance much easier and much less costly.
To learn more about how to use a NoSQL graph database for data governance, click here.
Senior Account Executive, OrientDB, an SAP Company
Today, banks, credit unions and other financial services firms must cross many data chasms to keep customer information secure, accessible and functional for modern service application development and risk detection. Yet, the intertwined nature of aging data architectures and tangled data lineages make rapid application development difficult to accomplish—and buries organizations in technical debt.
Technical debt refers to the financial, time and risk costs of running services and applications from outdated system design or IT infrastructure.
Within financial services firms, IT architecture is built upon multiple databases and software systems in varying stages of their lifecycle. DevOps teams at financial firms are likely running multiple instances of Windows, Linux and numerous desktops connected across physical hardware and networks. Customer data is spread across Oracle, SQL, Hadoop and Hive databases and accessed via insecure mobile devices, mobile applications, online banking systems, lending platforms, banking centers and more. As individual platforms reach their usage limits, the code on which they were built can become outdated, cause latency issues and expose areas of risk in older systems.
On top of the costs of running, maintaining and deploying those systems, many institutions are stuck servicing old HR, CRM or other databases, paying monthly retainers to software vendors just to host and access data trapped in redundant systems. Often, there isn’t an easy way to offload this data into a usable format without tremendous cost and effort.
The result? Financial institutions want to upgrade and innovate but are stuck paying thousands in service and maintenance fees for data infrastructures that are too inflexible, outdated and risk-prone to modify.
This technical debt builds and builds until a company is stuck with an inert, messy platform that makes decommissioning legacy applications extremely complex; implementing new applications, like mobile banking, within the current data topology expensive; and detecting fraud and performing risk assessment like finding a needle in a haystack.
Rather than forcing teams to mine disconnected data within limited relational databases, multi-model graph databases reduce technical debt by correlating relationships between many different data types.
Deploying a multi-model graph database enables DevOps teams to start running different functions within the same platform, not multiple platforms, service layers and data centers that all need to be maintained.
On-demand visualization models show how application changes will affect the entire IT ecosystem, providing a better snapshot of the financial impact of application development and deployment for new tech-based customer services.
This structure increases the “pluggability” of new SaaS, cloud solutions and integrated customer applications. With multi-model graph solutions automatically connecting new nodes and data formats, financial services firms can quickly innovate on top of their existing network and service layers without outdated code and data getting in the way.
In short, multi-model graph databases can turn DevOps from a cost center into an innovation center. The cost efficiency of having accessible data can be used to build and drive even more cost savings.
Once technical debt has been transformed into a surplus of accessible data, data lineage becomes much more manageable and a strong strategic asset for financial services DevOps teams servicing business units that want to modernize their operations.
Gerard (Jerry) Grassi, P.E.
Senior Vice President, OrientDB, an SAP Company
The power of graph database solutions means that creating data and finding relationships between data sets is not constrained by defined data classifications, formats, storage locations or original data structure.
By storing and monitoring relationship data, graph solutions enable organizations to act on changes to their data model without the limitations of a set database structure, known as a schema. A database that isn’t limited to one schema type also simplifies data modeling and querying of connected data.
Although graph solutions support schema-less use, you might still want to use a schema to enforce some structure within your data model and use.
The data structure you choose depends on the applications you’re building, the questions you want answered and how you plan to scale.
Here’s how schema types can enhance your graph database use and what to consider when aligning data schemas with graph database deployment.
In traditional relational database use, schemas include tables, views, indexes, foreign keys and check constraints. A graph database such as OrientDB still includes a basic schema of vertex objects (nodes, or data entities) and edge objects, in which edges store information about a particular data relationship.
The degree to which you define the classes of these edges and vertices depends on your graph database needs. Graph solutions simply provide more flexibility in how you define your data model.
OrientDB’s graph solutions support three schema options: schema-full, schema-less and schema-hybrid.
With graph solutions, you can define each schema type as you create the structure of your graph database.
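As a sketch of how the three modes differ in practice, here is what each might look like in OrientDB SQL; all class and property names are illustrative, not drawn from any real deployment:

```sql
-- Schema-full: declare every property and reject anything undeclared
CREATE CLASS Customer EXTENDS V
CREATE PROPERTY Customer.name STRING
ALTER PROPERTY Customer.name MANDATORY true
ALTER CLASS Customer STRICTMODE true

-- Schema-less: no declarations; properties appear at insert time
INSERT INTO V SET nickname = 'jdoe', signupDate = sysdate()

-- Schema-hybrid: enforce some properties, allow extras
CREATE CLASS Account EXTENDS V
CREATE PROPERTY Account.accountId STRING
ALTER PROPERTY Account.accountId MANDATORY true
INSERT INTO Account SET accountId = 'A-1', region = 'EU'  -- region is undeclared but accepted
```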
The type of schema you choose to power your graph solution ultimately depends on the kinds of questions you want your graph solution to answer about your data relationships. The key difference between the schema types is how specifically you define the constraints on the types of nodes and the allowed relationships between them.
There’s no one-size-fits-all approach, but the schema flexibility of graph solutions allows you to think about why you are querying data instead of what data when building your graph database solution.
For example, if you’re building an application for recommended services to existing customers, such as upselling financial packages for banking members, you can likely use a schema-hybrid model to define the nodes and edges with specific data types.
If you’re trying to uncover unforeseen relationships between data sets, such as in fraud-detection applications, a schema-less model enables you to adjust relationship guidelines as the database generates real-time visualizations.
The schema model you choose also depends on how you’d like to scale the database for use with new or changing data inputs, systems and use cases.
Graph databases shine here because relationships and vertex types are created as new data comes into the system, allowing your database to “expand” as business use changes.
As you scale your graph solution to different areas of your business, the schema you have in place will impact how you build traversals between data. Perhaps you started with a customer targeting application using a schema-hybrid. As that grows, you might want to move to a schema-less model to extract even more data around the results of that targeting and use it to infer relationships between customer use and product innovation. Discovering or creating new relationships between data types and applications works best with flexible system design, which a schema-less model can provide. In this instance, using a dynamic language can help modify or eliminate data classes for a less rigid design.
Likewise, if you started with a schema-less strategy when building your graph database, you might find you want to enforce certain data quality standards or governance rules as more applications or inputs connect to your database. Or, perhaps, you want to bring in legacy schema indexes and represent those structures within your graph solution. In that case, it might make sense to switch to a schema-hybrid model or schema-full strategy with more defined global relationship types and rules within your database. Graphical query tools can enable developers to start building more structure into their existing database.
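As a sketch of that migration path, tightening a formerly schema-less class in OrientDB SQL might look like the following (class and property names are illustrative):

```sql
-- Tighten an existing, loosely typed class step by step
CREATE PROPERTY Customer.email STRING
ALTER PROPERTY Customer.email MANDATORY true
CREATE INDEX Customer.email UNIQUE      -- enforce uniqueness for governance rules
ALTER CLASS Customer STRICTMODE true    -- final step to full schema enforcement
```

Because constraints are applied per class, you can tighten one part of the model at a time while the rest of the database stays flexible.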
Graph solutions already have a major performance advantage over schema-enforced databases. With OrientDB, users can store up to 120,000 records per second in nodes and process transactions about ten times faster than other databases with defined schemas.
Still, if you’re enforcing more defined types of rules, such as mandatory, unique or null constraints, within your schema model, it’s important to test how that structure and the applications you’re using will impact transactional performance.
The schema model you choose is highly dependent on the applications you want to build and how your organization wants to leverage graph databases. No matter which model you choose, graph databases will enforce data nodes and classes to maintain data integrity. However, a schema is hardly set in stone. One of the major advantages of the graph model is that it supports multiple types of schemas side-by-side and enables schema constraints to be reconfigured as needs change.
Director of Consulting, OrientDB, an SAP Company
Developers have always had to do more with less. What the OrientDB team has loved about working with developers over the last eight years is learning all the ways in which they’ve innovated around complex data challenges even as data types, formats and application usage have changed.
When OrientDB founder Luca Garulli created our database management system, he wanted to empower developers’ unsung innovation by freeing them from the chains of monolithic data formats. His mission was to create a high-performance transactional graph database built for the way developers actually work.
OrientDB’s goal has always been to offer a solution that gives developers all the tools they need, in one place, to build innovative applications that meet their unique business challenges. It goes beyond providing an open-source product; at OrientDB, we aim for an open innovation strategy that makes not just the code, but the business transformation steps, accessible to developers from all types of industries.
We’re happy to announce that we’ve launched the next phase of this mission: OrientDB.org, a free, one-stop resource for downloading, using, optimizing and deploying the OrientDB graph database solution. Built just for developers, the site includes product documentation, help files, case studies, training materials and release notes to help developers in every step of graph database use, from download to deployment.
Luca built OrientDB in response to the challenges developers face as new business applications come into play across the enterprise. When database technology was invented 40 years ago, developers didn’t have to contend with capturing and managing unstructured data from social networks, mobile applications and big-data analytics.
Now, application developers are faced with a far broader set of data challenges.
Over the years, we’ve baked features into the OrientDB platform that enable developers to solve these challenges, including our Teleporter migration tools, auditing capabilities, offline monitoring, database backups without delays, and dynamic-distribution configuration and clustering.
The work of an application developer is always a moving target, though, as new business needs, data inputs and business goals shift.
Even as data volumes and formats have grown, developers have continued to create cutting-edge applications using OrientDB. They’ve not only found a way to absorb and work with uncharted data types, but they’ve spun them up into next-generation business applications, from responsive, geospatial network management for telecommunications to real-time data governance reporting.
In 2019, our goal isn’t just to provide developers with the tools they need to use graph solutions; we want to empower them to build the most powerful cloud business applications in the world.
When you’re building next-generation applications from scratch in your enterprise, there’s not often a blueprint for how to do it. With OrientDB.org, we’ve centralized instructions on the many use cases and applications our customers have built using the OrientDB platform. The site includes deep insight into how they’ve built and deployed those solutions, on both a technical and transformational level.
Take advantage of the institutional knowledge from a multitude of developers across the world by checking out the OrientDB.org site. It’s our heartfelt Valentine’s Day gift to the innovative application developers working hard to move their projects forward every day!
The OrientDB Team
OrientDB, an SAP Company
What if average citizens were able to quickly experiment with public government spending data to determine whether any officials were misusing taxpayer funds?
That’s the question Gabriel Mesquita, a software developer and computer scientist from Brazil, recently set out to answer.
In a post on Hacker Noon, Mesquita explored whether any Brazilian government officials were using their monthly allowances illegally by buying products and services from companies owned by people they know.
In his experimental attempt to detect fraudulent patterns in spending, he turned to OrientDB, the world’s fastest NoSQL database.
After obtaining the public data, Mesquita built a data model that leveraged graph database technology (the model itself is in Portuguese).
Mesquita’s model detects which deputies performed multiple transactions with specific companies, whether those companies donated to the specific deputy’s campaign and whether the deputy has any connections, directly or indirectly, to each company in question.
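A traversal of this shape can be expressed with OrientDB’s MATCH statement. The sketch below is illustrative only; the class and edge names are hypothetical, and Mesquita’s actual model may differ:

```sql
-- Deputies who spent public funds with companies that donated to their campaigns
-- (Deputy, Company, SpentWith and DonatedTo are hypothetical names)
MATCH {class: Deputy, as: d}-SpentWith->{class: Company, as: c},
      {as: c}-DonatedTo->{as: d}
RETURN d.name AS deputy, c.name AS company
```

Because both patterns share the aliases `d` and `c`, the query returns only pairs where the spending and the donation connect the same deputy and company.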
The results? Seven deputies spent their monthly allowances with companies that supported their campaigns in 2014. Another deputy received a donation from a company and then used taxpayer money three different times to support that company.
None of this behavior is illegal, Mesquita suggests.
But, in support of transparency and to serve as another check and balance on politicians, it’s important that taxpayers know about it.
Mesquita identified two major takeaways from his research.
Since data pertaining to friends and relatives of politicians isn’t available in Brazil, Mesquita used “fake data” to flesh out his model.
“To validate the model and the whole process I inserted fake data to simulate the fraud scenarios,” Mesquita writes. “Hopefully, if the Chamber of Deputies has this kind of data, they could use the same process to inspect the deputies’ expenses.”
Because he couldn’t access all of the real-world data needed to truly test his thesis, Mesquita’s exercise was experimental in nature.
Still, he found the right tool in OrientDB.
“OrientDB is a great multi-model DBMS,” Mesquita writes. “Graphs are great [at] exposing relationships,” and OrientDB “is a viable solution to find patterns with open data and to provide transparency for our population.”
For more details on Mesquita’s project, read the full piece on Hacker Noon.
To learn more about the world’s leading multi-model graph database and NoSQL solution, visit https://orientdb.com.
Senior Marketing Director, OrientDB, an SAP Company
In the world of master data management, silos are a tremendous challenge.
When enterprises try to process information from disparate systems, they too often use sub-optimal applications and initiatives laden with errors and misinformation, not to mention blown timelines and budgets. But master data management (MDM) is actually more than just the breaking down of data silos. It’s about efficiency and service, innovation and security, clarity and perspective. It’s about getting the most out of your most valuable resource: your data.
Here are the four things you need to know about MDM:
For existing enterprises, one of the largest hurdles to developing an MDM system is the multiplicity of databases and applications usually involved. What’s more, Enterprise Resource Planning and Customer Relationship Management systems rely on structured data, whereas the proliferation of IoT devices has created exponential growth in unstructured data.
Take the example of Enel, one of the largest power utilities in Europe. Enel was struggling to provide analytics and reporting across all of its power generation plants and equipment. Data flows in from multiple systems, including IoT devices on power generation equipment, plant maintenance systems, scheduling applications and other sources, each with its own data types. Enel was exporting data to CSV files and manually aligning it to generate reports and analytics.
Other companies in similar scenarios might invest in expensive integration bus systems to support a polyglot persistent environment.
Enel found a solution in a native multi-model database, which allowed it to bring all data into a single database: no more worrying about different data types or keeping the different systems in sync. The result was real-time data analysis across all sites and data systems, with no more month-long manual processes to generate reports.
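To illustrate the kind of cross-source query such consolidation enables, here is a hypothetical OrientDB SQL sketch; the class and edge names (Plant, HasSensor, HasMaintenance) are invented for illustration, not taken from Enel’s system:

```sql
-- One query spanning plant, sensor and maintenance data in a single database
SELECT name,
       out('HasSensor').size() AS sensors,
       out('HasMaintenance')[status = 'open'].size() AS openTickets
FROM Plant
```

With the former CSV-export workflow, answering the same question meant joining extracts from several systems by hand.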
All companies are now digital enterprises. Since all systems rely on data, MDM is a discipline all organizations need in order to remain competitive. Master data powers everything from financial reporting to real estate transactions to fraud protection. The ultimate results are faster and better decisions, improved customer satisfaction, enhanced operational efficiency, and a better bottom line.
Most people who’ve heard of MDM immediately link it to one of its primary objectives: the elimination of redundant data. Yes—having a central repository of data will eliminate data redundancies, as long as it’s done correctly. But the benefits of MDM extend beyond redundancy elimination. Namely: data consistency, data flexibility, data usability, and data security (via role-based access).
Mergers and acquisitions can be rough on data consistency. Reconciling several master data systems brings headaches from different data system taxonomy and structures. This usually results in two systems remaining separate and linked only through a special reconciliation process.
As more acquisitions and mergers occur, the problem compounds into a labyrinth of siloed systems and data. This brings you back to the problem that spurred you to invest in MDM in the first place.
The answer lies in the database management system and vendor you choose for your MDM system. Make sure to choose a vendor that offers a flexible, multi-model database that allows you to easily develop a single data taxonomy.
The most powerful and effective MDM systems run on databases that fit the business model in question.
As an example, Xima Software works with networks that are naturally graph-shaped. For a telco, then, an MDM system built on a multi-model graph database is the most effective strategy: the database uses the same graph model as the network itself, making the network easy to visualize.
If there were a fifth thing you needed to know about MDM, it would be that it’s rapidly evolving to meet the needs of today’s enterprises and their customers. Retailers are using it to improve time-to-market and address their customers’ growing expectations to deliver a true omnichannel experience. The consumer packaged goods industry is using it to ensure the accuracy of nutritional information and comply with local disclosure regulations. And every industry is using it to break down data silos.
Gerard (Jerry) Grassi, P.E.
Senior Vice President – OrientDB
London – March 5, 2018
If you are using the 2.2.x series, please upgrade your production environments to v2.2.33. View our Change Log for a full list of new features, bug fixes and other improvements.
Download the latest OrientDB version (v2.2.33) from our website.
If you are currently using a previous version of OrientDB, we recommend you upgrade using the link above. However, if you would like to download previous versions of OrientDB Community Edition, you may do so here: http://orientdb.com/download-previous/
Team and Contributors
A big thank you goes out to the OrientDB team and all the contributors who worked hard on this release, providing pull requests, tests, issues and comments.
Director of Consulting
In case you missed some of the latest news and haven’t subscribed to our newsletter below, here’s a quick recap of this month’s news. Find out how OrientDB can help your startup company with our latest case study and how companies across the world are using graph technology to secure their systems. Stay up to date with our latest stable release or test the new features in our 3.0 milestones release.
If you want to learn about how OrientDB can help your startup company, take a look at our latest case studies. Find out how New.sc uses graphs and multi-model features to power an intuitive and increasingly popular news platform.
Earlier this month we released OrientDB 2.2.20. This is our latest stable release so if you haven’t upgraded yet, go ahead and download it now. In case you missed it, last month we released our OrientDB 3.0 Milestones Edition. Though not yet suitable for production environments, if you want to test the latest features included in our upcoming 3.0 release, head over to our Labs page.
This May, OrientDB announced its partnership with Chinese system integrator and fraud detection experts MiMe. Using multi-model databases, MiMe is helping companies across China move from antiquated relational systems to modern-day, innovative database systems.
With the release of OrientDB Teleporter last year, OrientDB is being used around the world to synchronize relational data. In fact, we’re the first NoSQL database to enable this feature. Whether it’s data from Oracle, SQLServer, MySQL, PostgreSQL or HyperSQL, Teleporter transforms tables into graphs and allows relational and NoSQL technologies to coexist.
Stay tuned for more news,
The OrientDB Team
We value and appreciate the hard work put in by the world-wide OrientDB community. That’s why, as a small token of appreciation, we’ve started sending out some gadgets and rewards to our community members.
A special thank you to Saeed for his dedication to OrientDB. Among his numerous valuable contributions, some noteworthy examples include pull requests on the OrientJS repository in which, among several improvements, he implemented the IF NOT EXIST clause when creating classes and properties and the IF EXIST clause when dropping them.
Michael is the original author of the Apache TinkerPop 3 graph structure implementation for OrientDB, which will be officially supported in upcoming major OrientDB releases!
Not only has Scott provided detailed bug reports and documentation, he’s helped countless community members by shedding light on new features and troubleshooting issues.
Thank you to Saeed, Michael and Scott, who as a gesture of appreciation will be receiving a Raspberry Pi 3® Starter Kit along with some OrientDB merchandise (T-shirt, stickers and that kind of stuff)**.
We’d also like to send out a special thank you to all the community members writing about OrientDB in their blogs, articles and papers. That’s why next time around we’ll be sending out some more gadgets to our top community bloggers.
So if you’re currently writing about @OrientDB, remember to use the #OrientDB and #Multimodel tags in your posts and head back to this page regularly. You might find your name on our Top Contributors list!
*All trademarks are the property of their respective owners.
**All OrientDB Community Award winners will be contacted individually in order to receive their prize.
By OrientDB CEO, Luca Garulli
After ransomware groups recently wiped about 34,000 MongoDB databases and exposed about 35,000 Elasticsearch databases on the Internet (read the full article), we advise OrientDB users to double-check their OrientDB servers.
OrientDB’s average level of security is much stronger than that of both MongoDB and Elasticsearch. However, nothing can keep you totally safe, especially if you are exposing an OrientDB server directly to the Internet and/or you haven’t changed the default passwords in your database.
1. If you aren’t using the default users (admin, reader and writer), then delete them.
2. If you’re using them, be sure you’ve changed the passwords for all three default users: admin, reader and writer.
3. When you installed OrientDB for the first time, the script asked for the root password. Make sure you didn’t set something obvious such as “root”, “orientdb”, “password” or any other simple, guessable password.
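As a sketch of steps 1 and 2 in OrientDB SQL (assuming v2.2+, where DROP USER is available; the passphrase below is a placeholder, substitute your own secret):

```sql
-- Run against each database: remove unused default users, rotate the rest
DROP USER reader
DROP USER writer
UPDATE OUser SET password = 'a-long-random-passphrase' WHERE name = 'admin'
```

Passwords stored via the OUser class are hashed by the server, so the plain-text value is never persisted.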
1. If you can, don’t expose the OrientDB server to the Internet.
2. Remember that starting from v2.2 you can configure stronger SALT cycles for hashed passwords. Take a look at the following page for more details: https://orientdb.com/docs/
3. If you’re working with very sensitive data, please consider using encryption at rest with the AES algorithm. For more details, take a look at the following page: http://orientdb.
4. Don’t use a password at all. Since v2.2.14, OrientDB Enterprise Edition supports authentication via symmetric keys for the Java client. See https://orientdb.com/docs/2.2/Security-Symmetric-Key-Authentication.html.
For any questions, don’t hesitate to ask the Community Group.
Thanks and keep your data safe!
Founder & CEO