Container-based Big Data
The Data Lake that fits from day one and lets you grow.
Many Big Data projects fail because of the complexity and large investment involved in setting up a Data Lake with traditional vendors. Before any value is seen, substantial resources are spent just getting the platform in place.
Learning from this, we created a Big Data platform based on a modern container-based architecture that lets you start small and grow over time.

Benefits
Easy
A modern framework allows a quick start-up.
We believe that data analysis should deliver value quickly, not after months of development. Our solution installs with almost no manual configuration.
Scalable
A platform that lets you grow.
We also believe a solution should scale from megabytes to petabytes without having to invest in new platforms.
Future-proof
Adaptable and fast.
We use standard components glued together with our own code, so any technology can be replaced whenever something better becomes available.
The challenges in running a Big Data project
How can you know now what the company will need when the project is finished?
Can you define your project?
Starting a Big Data project is not easy. If you look at the Apache Hadoop stack, the most common choice of technology, you will find about 50 different projects to choose from. Each serves different use cases depending on the value you want to get, and without experience it is hard to pick the projects that fit you best.
Are you 100% certain what you will need?
Apart from selecting the technologies you want to use, you also need to choose a hardware platform and find people for operations and development. All of this makes for a costly and long project, even if you select a distribution of the Apache Hadoop stack.
Then it all changes.
This technology space also changes at a fast pace. There may be products worth using that are not in the Hadoop stack, and choosing one of the distributors does not help you there, because they neither integrate those products nor take responsibility for the solution. Security will not work across different technologies either, so you have to develop your own custom solution for that. If you invest in one technology now, chances are it will be outdated by the time your platform is finally ready.
And now we have the cloud option.
In recent years, cloud technologies such as Docker and Kubernetes have emerged and changed the way we look at developing and deploying solutions. Most Big Data technologies were not built to run effectively in those environments.
Inovia's solution
We keep it simple.
Inovia saw early on the drawbacks of large projects and legacy approaches to traditional Big Data clusters. We believe that any new idea for analyzing data should be realized quickly, without months of development, and that any solution should scale from megabytes to petabytes without reinvesting in a new platform.
We have built a modern framework based on Docker and orchestrated with Kubernetes that helps our customers grow into their Big Data platform while delivering value from day one. It can be installed easily on very little physical or virtual hardware, without extensive manual configuration or tuning. Security and APIs are in place from the start, for developers to use directly or through a CI/CD pipeline. We use standard components glued together with our own code, so any technology can be replaced as soon as a better one is available.
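To illustrate the orchestration idea, here is a minimal sketch of how one containerized component could be deployed to Kubernetes with the official Python client. The image, names, and namespace are hypothetical placeholders, not the actual installer Inovia ships.

# Minimal sketch: deploying one containerized component to Kubernetes.
# Assumes the official `kubernetes` Python client and a working kubeconfig.
# The image, names, and namespace below are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig (e.g. ~/.kube/config)

container = client.V1Container(
    name="ingest",
    image="example/ingest-service:latest",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="ingest"),
    spec=client.V1DeploymentSpec(
        replicas=1,  # start small; scale by raising the replica count
        selector=client.V1LabelSelector(match_labels={"app": "ingest"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "ingest"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)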
Insight set up
Add on features
To get you started quickly, we install a basic platform glued together with our own code, with NiFi for data ingestion and Elasticsearch as the Data Lake; a minimal ingestion sketch follows below. On top of that, other technologies can be added.
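As a rough illustration of what "Elasticsearch as Data Lake" means in practice, this sketch indexes and queries one document with the official Python client. The endpoint, index name, and fields are hypothetical, and the keyword-style calls assume the elasticsearch-py 8.x client.

# Minimal sketch: writing to and querying the Elasticsearch-backed Data Lake.
# Assumes elasticsearch-py 8.x; the endpoint, index, and fields are examples.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

# Index one event (NiFi would normally deliver these continuously).
es.index(
    index="sensor-readings",
    document={
        "sensor": "gauge-1",
        "value": 42.7,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    },
)

# Query the same index back.
hits = es.search(index="sensor-readings", query={"match": {"sensor": "gauge-1"}})
print(hits["hits"]["total"])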
Hadoop
For analysis of large amounts of data.
HDFS, Alluxio, Hive, Spark, Impala
Object Store
For storage and analysis of binary or large files.
MinIO, Ceph
NoSQL
For non-tabular data.
FoundationDB, MongoDB
AI algorithms
To get more insight from the data.
Statistical methods, Deep learning, Natural language processing
Virtual Assistants
For empowering co-workers, partners and customers.
Real-time data pipelines
To analyze streaming data, with high-performance messaging and intermediate storage.
Kafka, Breeze, NATS
Workflow engines
To programmatically author, schedule, and monitor workflows (see the sketch after this list).
Airflow
Data Ingestion
For collecting any type of data in a stable, controlled, and monitored way.
NiFi, voice, images, gauges
Notebook support
For data-driven and interactive data analytics.
Zeppelin, Jupyter
Graph support
For storing and querying graphs containing hundreds of billions of vertices and edges.
JanusGraph
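As promised above under Workflow engines, here is a minimal sketch of what programmatic workflow authoring looks like in Airflow. It assumes Airflow 2.x; the DAG id, schedule, and task bodies are hypothetical placeholders rather than a shipped pipeline.

# Minimal sketch of an Airflow DAG (assumes Airflow 2.x installed).
# The dag_id, schedule, and task logic are illustrative placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull new data from the ingestion layer")  # placeholder

def load():
    print("write the result into the Data Lake")  # placeholder

with DAG(
    dag_id="daily_ingest",            # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load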
Customized package ensures scalability
Selected components carefully packaged to ensure a perfect fit.
The added components are carefully selected and packaged to fit the customer's needs. Where possible, cloud-native components are used to ensure scalability and operational ease.
The platform is designed to enable serverless computing, ensuring scalability and that no resources are consumed without purpose.

“We turned a machine park from 1961 into an amazingly profitable business.”
– Rune Eliasson
Owner of Saxnäs Hydro Power Plant.
User license
We have subscription plans for all sizes of business.
With an annual subscription, billed monthly, continuous updates are always guaranteed, with the latest features securely in place and ready to use.
Let our sales team contact you to customize an offer that suits your needs >
Environment
Choose the technology environment that suits your business.
Inovia SaaS Service
Our own Cloud Service enables fast start-up and full utilization of functionality. It doesn’t require any maintenance and includes all updates and improvements to language models.
Our state-of-the-art data center is located in Sweden.
Optional Cloud provider
We deliver to a Cloud provider of your own choice.
On premises
Use your own servers and install in-house (Docker, virtual servers).
How do you start?
What is the best strategy for my company?
How do you define a use case to structure your goals? We'll help you sort it out and set up a strategy.