Happy Vinayaka Chavithi to all @EAIESB

  On the occasion of Lord Ganesha's Janmadin, we EAIESB employees actively participated in celebrating this event. As part of the celebrations we decorated the office in a devotional way and performed a pooja. Each and every employee took part in celebrating this event grandly. Here are some of the memories; whenever we recall this occasion, we feel blessed.


Machine Learning

Machine Learning! Machine Learning! Machine Learning!!! Is this a new mantra? Why are most enterprises chanting it now? How is it going to shape the technology of the future?

Really, will machines learn on their own going forward? What is Machine Learning meant for, and why are enterprises inclining towards it?

The importance of Machine Learning is growing rapidly because of the intelligent behavior of these systems: they predict results accurately for a wide range of scenarios within a very short span, using whatever data (historical or streaming) is available. Machine Learning helps predict outcomes by recognizing patterns in that data using a wide variety of algorithms.

One good instance of how enterprises use Machine Learning is Amazon, a giant in online retail shopping. It provides its customers with the best service: recommending relevant items based on their interests, alerting them to discounts, and so on. How does all this happen? Amazon mines patterns from the historical customer data it has collected, predicts customer behavior, and delivers what customers want; this is what lets the enterprise turn a profit.

This is just one instance of how Machine Learning helps enterprises. In the near future it will sharply reduce human effort, with a single machine performing tasks that once required hundreds of people.

How Machine Learning helps Machines?

Machines will become more intelligent and robust through the advanced processes of Machine Learning. So far, machines and robots have been designed to do only the things they were programmed and trained to do; they have no intelligence of their own and no ability to learn.

Nevertheless, this era is changing. In the coming future, machines will observe human behavior patterns at a task (for instance, driving) for a certain period, record those patterns through sensors, emitters, and so on, build intelligence of their own, and then perform the same tasks (such as driving a car) even more precisely.

The importance of Machine Learning grows day by day because humans cannot write an algorithm for each and every scenario. This is where Machine Learning shows its value: it builds systems that are automatic, adaptive, and self-optimizing in every field, able to make decisions without human intervention.

Machine Learning Framework

Machine Learning is a field of computer science that makes machines more intelligent without their being explicitly programmed. It is broadly classified into three categories:

  1. Supervised Learning
  2. Unsupervised Learning
  3. Reinforcement Learning

There is a wide variety of Machine Learning frameworks available in the market; here is a list for your reference:

  1. Apache Singa
  2. Apache Mahout
  3. Azure ML Studio
  4. Accord.NET
  5. Amazon Machine Learning
  6. Caffe
  7. H2O
  8. Massive Online Analysis (MOA)
  9. mlpack
  10. Oryx 2
  11. Pattern
  12. Shogun
  13. Spark MLlib
  14. scikit-learn
  15. TensorFlow
  16. Theano
  17. Torch
  18. Veles


71st Independence Day Celebrations @EAIESB

On the eve of the 71st Independence Day celebrations, the team at EAIESB actively participated in the festivities. As part of the celebration we decorated our office with tricolored flags, balloons, and other decorative objects, and we conducted cultural activities and games in which each and every member of the team took part. We can grandly announce that this day will be one of the memorable days in EAIESB's history. I would like to share some of the moments in which we dived deep into happiness and enjoyed ourselves to the fullest.

Happy 71st Independence Day to all my fellow Indians. @EAIESB


Fetching Node Values From XML or JSON Data Using Mule Expression Language

Recently I worked on a client requirement where, in one scenario, the input source XML or JSON data arrives with multiple node values, but per the requirement I needed to fetch only one particular node value from the file and route the data to the corresponding trading partner based on that value.

For better demonstration, below are the sample XML and JSON files used here. The use case is to fetch the value of the <NAME> node from the XML and JSON data.
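The original screenshots of the sample files are not reproduced here; a minimal pair consistent with the CLASS/STUDENTS/NAME path used in the expressions below might look like this (field names other than NAME are illustrative):

```xml
<CLASS>
  <STUDENTS>
    <NAME>John</NAME>
    <ROLLNO>101</ROLLNO>
  </STUDENTS>
</CLASS>
```

```json
{
  "CLASS": {
    "STUDENTS": {
      "NAME": "John",
      "ROLLNO": "101"
    }
  }
}
```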


In Mule, there is an inbuilt xpath3 Mule Expression Language (MEL) function for fetching the required node value from an XML payload.

Syntax:    #[xpath3('//CLASS/STUDENTS/NAME')]

If the XML content is stored in, and needs to be retrieved from, a session variable or any other variable within the flow, use the following syntax:

Syntax: #[xpath3('node root', message.payload, 'STRING')]

In the syntax above, 'node root' is the XPath of the node you want to fetch (e.g. //CLASS/STUDENTS/NAME), message.payload is the variable holding the XML content (e.g. sessionVars.VAR), and 'STRING' is the output data type (e.g. STRING or INT).
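For example, if the XML content is stored in a session variable named VAR (a hypothetical name), the call might look like:

```
#[xpath3('//CLASS/STUDENTS/NAME', sessionVars.VAR, 'STRING')]
```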

This three-argument form is the full signature of the xpath3 function; use it whenever the input should come from a session variable or some other source rather than the default payload.

JSON Data:

Suppose you get input JSON data from which you want to fetch node values; then follow the syntax below.


In Mule, there is an inbuilt json MEL function for fetching the required node value from a JSON payload.

Syntax:  #[json:CLASS/STUDENTS/NAME]
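As a sketch of how this expression is used inside a flow (the flow name here is hypothetical), it can be embedded in a component such as a Logger:

```xml
<!-- Hypothetical Mule 3 flow: logs the NAME value from a JSON payload -->
<flow name="fetchNameFlow">
    <logger level="INFO" message="Student name: #[json:CLASS/STUDENTS/NAME]"/>
</flow>
```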


Error while downloading HTTP Connector 0.8.1 in Mule Anypoint Studio 7

I came across an issue while adding modules in Mule Anypoint Studio 7: the studio was not able to resolve the dependencies of the HTTP Connector.

After some thorough analysis I found that the root cause of this issue was that Mule Anypoint Studio 7 was loading the VM (Virtual Machine) directory from a JRE installation instead of a JDK installation. To view the installation details, go to Help >> Installation Details >> Configuration in Mule Anypoint Studio 7.

Even after changing the Java preferences in Anypoint Studio 7, the issue was not resolved.


To resolve the issue you have to make sure that Mule Anypoint Studio 7 loads the VM directory from Java JDK. Follow the step-by-step process to change the path.

  1. Open the AnypointStudio.ini file in the Mule Anypoint Studio 7 installation directory.

  2. The configuration details in the ini file will be as shown below.

  3. Now add the -vm option followed by the path to the Java executable (e.g. javaw.exe on Windows) in the bin directory of your Java JDK installation. Make sure you add these lines above -vmargs.

  4. After adding the configuration details, restart Mule Anypoint Studio 7. Once you restart the studio and create a new project, you should find the HTTP and Sockets modules in the Mule Palette section.


  5. Now try to install a new module. Here I will show adding the File module: click on Add Module.

  6. You should now see the installed modules and their versions. Click on Add Extension to add a new module/extension.

  7. Search for the module you want to download and select it.

8. Now you can see the File module has been added successfully without any issues.

With this you have seen how to resolve the issue while downloading the HTTP Connector, and also how to add a File module in Mule Anypoint Studio 7.
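The change described in step 3 can be sketched as follows (the JDK version and path are assumptions; substitute your own installation path, and keep your file's existing entries):

```ini
-vm
C:\Program Files\Java\jdk1.8.0_171\bin\javaw.exe
-vmargs
```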



Are you facing issues while downloading modules in Mule 4 on Windows, or are you not able to change your Java path in Mule 4 Studio 7.0?

In the recently released Mule 4, I faced some issues with module installation (File, FTP, HTTP, etc.) and Java path settings in a Windows environment. After spending hours on it, I came to know that Mule 4 was not reflecting the updated Java JDK path in the Studio.

Here is a common solution for those issues.


Step 1: Open your Mule 4 installation location and open the AnypointStudio.ini file as shown below.

Step 2: Add your Java location to the file as shown below.



Note: you need to place the Java path entry after the plugins entry, as shown above.
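As a sketch, the relevant portion of AnypointStudio.ini might look like this after the edit (the launcher jar version and JDK path are assumptions; keep your file's existing values):

```ini
plugins/org.eclipse.equinox.launcher_1.4.0.jar
-vm
C:\Program Files\Java\jdk1.8.0_171\bin\javaw.exe
-vmargs
```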

Now restart your Mule Studio and download the modules.

Step 3: Now open the project preferences and check the Java version.

Your Java path now points to Java8\JDK. You can now download the modules.

Boomi Common Logging Error (bCLE) Framework integration for DellBoomi

In the current era of IT, every enterprise and individual customer has to look for intelligent frameworks that provide granular-level visibility into how their business applications are running and how stable they really are.

In a real-time enterprise we never know when application failures might occur, and they can lead to business-down situations if there are no right methodologies in place to handle them. Handling exceptions at the right time is a key aspect for every enterprise.

  • The success of any business depends on how stably the enterprise's systems run, with zero downtime.
  • Systems should be built to behave intelligently, using frameworks/tools to flag issues in advance and alert the concerned stakeholders in minimal time, avoiding system-down situations.
  • The framework has to provide top-down and bottom-up transparency to end users and help them perceive and understand issues, even if they come from a non-technical background.

Does Dell Boomi's exception handling framework keep monitoring the applications, handle all errors intelligently, and provide complete transparency at the enterprise and application level?

It really is an aha! moment for any enterprise or customer when a single framework/tool provides all the information to track in one place, with respect to your Atom, Molecule, Application, and so on.

Yes, this can be achieved through the Boomi Common Logging Error (bCLE) framework, just by integrating it with Boomi.

How bCLE helps Boomi?

The Boomi Common Logging Error (bCLE) framework is built on Boomi's exception handling framework, made more intelligent by additional wrappers around it. It also makes it feasible for even a non-technical person to understand and track the status of their business applications through the bCLE GUI application.

bCLE Framework Architecture

Benefits of bCLE

Integration of bCLE with Boomi is just a plug-and-play model. It provides the following advantages and features:

  • Handles exceptions in a standardized, uniform structure
  • Publishes real-time alerts to the configured users and stakeholders for each error
  • Enables role-based access: track issues application-, process-, and error-wise
  • Rich GUI for error search
  • Export and import: option to share required error log data for analysis
  • Graphical dashboards: application-, process-, and error-wise
  • Solution repository: SOPs in place for all resolved issues
  • Mail notification: provides accurate, detailed error information, along with the SOP number with resolution steps to follow if it is a repeated error
  • Requires zero downtime for applying updates (such as configuring new application details or changing existing ones), so business is not impacted and keeps running smoothly

In this way the bCLE framework is intelligent enough to handle all the errors encountered at the enterprise or application level during your business day.


Boomi Common Logging & Error Handling Framework

bCLE (Boomi Common Logging & Error Handling Framework)

EAIESB is happy to announce the bCLE framework to all Boomi customers. If you are a Dell Boomi customer, please register with us at bcle@eaiesb.com.

We will implement bCLE free of cost. Demos will be available starting in August. Stay tuned!


Migrating TIBCO B2B/EDI (Business Connect & Business Works) interfaces to Dell Boomi B2B (Trading Partner) EDI (X12/4010 824 – Application Advice)

How to integrate Spark with Apache Cassandra


What is Apache Cassandra?

Apache Cassandra is a free and open-source distributed NoSQL database for handling large amounts of structured data across many commodity servers, providing a highly available service with no single point of failure. It supports replication (including multi-data-center replication), scalability, fault tolerance, tunable consistency, MapReduce, and its own query language. NoSQL databases are increasingly used in big data and real-time web applications.
Code to integrate Spark with Apache Cassandra:

Below is the code that connects Spark to Apache Cassandra.

val conf = new SparkConf()
conf.set("spark.cassandra.connection.host", "<provide your host id>")
conf.set("spark.cassandra.auth.username", "Hadoop") // provide your username
conf.set("spark.cassandra.auth.password", "Hadoop") // provide your password



Get an acknowledgement by using the code below:

println("Connection created with Cassandra")

Create sample data in Apache Cassandra and retrieve using Scala code into Spark:

Here Apache Cassandra is installed on a Linux system, so I log in to Cassandra with the following command, which supplies the username (-u) and password (-p):

cqlsh -u Hadoop -p Hadoop

Run the following command to create a keyspace called "employeeDetails":

CREATE KEYSPACE employeeDetails WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 1};

To use the keyspace and create a table in Cassandra, run the following commands:

USE employeeDetails;

CREATE TABLE employeeData(EmpID text PRIMARY KEY,EmpName text,EmpAddress text);

To insert data into employeeData, run the following command:

INSERT INTO employeeData(EmpID,EmpName,EmpAddress) VALUES ('E121','Govardhan','Hyderabad');

Now to read the inserted data, use the following command:

SELECT * FROM employeeData;

Now retrieve this data in Spark by executing the following code in Eclipse.

Note: in order to connect to Cassandra successfully, you need to add the spark-cassandra-connector_2.11-2.0.1 jar to the Eclipse build path.


import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.log4j._

object Cassandra {

  def main(args: Array[String]) {

    // To print errors only
    Logger.getLogger("org").setLevel(Level.ERROR)

    // Creating the Cassandra connection
    val conf = new SparkConf()
    conf.setAppName("CassandraIntegration").setMaster("local[*]")
    conf.set("spark.cassandra.connection.host", "<provide your host id>")
    conf.set("spark.cassandra.auth.username", "Hadoop") // provide your username
    conf.set("spark.cassandra.auth.password", "Hadoop") // provide your password

    println("Connection created with Cassandra")

    val sc = new SparkContext(conf)

    // Read the table and print each row
    val rdd = sc.cassandraTable("employeedetails", "employeedata")
    rdd.collect().foreach(println)
  }
}




Here you can see the output in the console.

This is how you can integrate Spark with Cassandra.