Boomi Common Logging & Error Handling (bCLE) Framework integration for Dell Boomi

In the current IT landscape, every enterprise or individual customer needs an intelligent framework that provides granular visibility into how their business applications are running and how stable they really are.

In a real-time enterprise we can never predict when application failures might occur, and without the right methodologies in place they can quickly lead to business-down situations. Handling exceptions at the right time is a key concern for every enterprise.

  • The success of any business depends on how stably its systems run, ideally with zero downtime.
  • Systems should be built to behave intelligently, using frameworks/tools that detect issues early and alert the concerned stakeholders in minimal time to avoid system-down situations.
  • The framework has to provide top-down and bottom-up transparency to end users and help them understand issues clearly, even if they come from a non-technical background.

Does Dell Boomi’s exception handling framework monitor applications and handle all errors intelligently, while providing complete transparency at the enterprise and application level?

It is truly an “aha!” moment for any enterprise or customer when a single framework/tool provides all the information to track in one place with respect to your Atom, Molecule, Application and so on.

Yes, this can be achieved through the Boomi Common Logging & Error Handling (bCLE) framework, simply by integrating it with Boomi.

How does bCLE help Boomi?

The Boomi Common Logging & Error Handling (bCLE) framework is built on Boomi’s exception handling framework and extends it in a more intelligent way by adding additional wrappers, so that even a non-technical person can understand and track the status of their business applications through the bCLE GUI application.

bCLE Framework Architecture

Benefits of bCLE

Integration of bCLE with Boomi follows a plug-and-play model and provides the following advantages and features:

  • Handles exceptions in a standardized, uniform structure
  • Publishes real-time alerts to the configured users and stakeholders relevant to the error
  • Role-based access: track issues by application, process and error
  • Rich GUI for error search
  • Export & import: share the required error log data for analysis
  • Graphical dashboards by application, process and error
  • Solution repository: SOPs in place for all resolved issues
  • Mail notification: provides accurate, detailed error information, along with the SOP number for resolution steps to follow if it is a repeated error
  • Requires zero downtime to apply updates (such as configuring new application details or changing existing applications), so the business is not impacted and keeps running smoothly

In this way, the bCLE framework is intelligent enough to handle all the errors encountered at the enterprise or application level during your business day.

 

bCLE (Boomi Common Logging & Error Handling Framework)

EAIESB is happy to announce the bCLE framework to all Boomi customers. If you are a Dell Boomi customer, please register with us at bcle@eaiesb.com.

We will implement bCLE free of cost. Demos will be available starting in August. Stay tuned!

 

Migrating TIBCO B2B/EDI (Business Connect & Business Works) interfaces to Dell Boomi B2B (Trading Partner) EDI (X12/4010 824 – Application Advice)

How to integrate Spark with Apache Cassandra?

What is Apache Cassandra?

Apache Cassandra is a free and open-source distributed NoSQL database for handling large amounts of structured data across many commodity servers, providing a highly available service with no single point of failure. It supports replication (including multi-data-center replication), scalability, fault tolerance, tunable consistency, MapReduce, and its own query language (CQL). NoSQL databases are increasingly used in big data and real-time web applications.
Code to integrate Spark with Apache Cassandra:

Below is the code that connects Spark to Apache Cassandra.

import org.apache.spark.SparkConf

val conf = new SparkConf()
conf.set("spark.cassandra.connection.host", "10.0.0.00")   // provide your Cassandra host IP
conf.set("spark.cassandra.auth.username", "Hadoop")        // provide your username
conf.set("spark.cassandra.auth.password", "Hadoop")        // provide your password
conf.setMaster("local[*]")
conf.setAppName("CassandraIntegration")

Get an acknowledgement by printing a confirmation message:

println("Connection created with Cassandra")

Create sample data in Apache Cassandra and retrieve it in Spark using Scala code:

Here Apache Cassandra is installed on a Linux system, so we log in to Cassandra with the following command, which combines the hostname, username and password:

cqlsh 10.0.0.27 -u Hadoop -p Hadoop

Run the following command to create a keyspace called "employeeDetails":

CREATE KEYSPACE employeeDetails WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 1};

To use the keyspace and to create a table in Cassandra, run the following commands:

USE employeeDetails;

CREATE TABLE employeeData(EmpID text PRIMARY KEY, EmpName text, EmpAddress text);

To insert data into employeeData, run the following command:

INSERT INTO employeeData(EmpID, EmpName, EmpAddress) VALUES ('E121', 'Govardhan', 'Hyderabad');

Now to read the inserted data, use the following command:

SELECT * FROM employeeData;

Now retrieve this data in Spark by executing the following code in the Spark Eclipse project.

Note: to establish the Cassandra connection successfully, you need to add the spark-cassandra-connector_2.11-2.0.1 jar to Eclipse.
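If you build the project with sbt instead of adding the jar to Eclipse by hand, a dependency block along these lines should pull in the same connector (the Spark and Scala versions below are assumptions; the connector version matches the jar named in the note above):

// build.sbt sketch -- Spark/Scala versions are assumed; connector version matches the jar above
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark"   %% "spark-core"                % "2.1.0",
  "com.datastax.spark" %% "spark-cassandra-connector" % "2.0.1"
)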

 

import org.apache.spark.SparkConf
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.log4j._
import com.datastax.spark.connector._

object Cassandra {

  def main(args: Array[String]) {

    // Print errors only
    Logger.getLogger("org").setLevel(Level.ERROR)

    // Create the Cassandra connection configuration
    val conf = new SparkConf()
    conf.set("spark.cassandra.connection.host", "10.0.0.00")   // provide your Cassandra host IP
    conf.set("spark.cassandra.auth.username", "Hadoop")        // provide your username
    conf.set("spark.cassandra.auth.password", "Hadoop")        // provide your password
    conf.setMaster("local[*]")
    conf.setAppName("CassandraIntegration")
    println("Connection created with Cassandra")

    val sc = new SparkContext(conf)

    // Read the table created above (unquoted names are stored lower-cased by Cassandra)
    val rdd = sc.cassandraTable("employeedetails", "employeedata")
    rdd.foreach(println)
  }
}

Here you can see the output in the console.

This is how you can integrate Spark with Cassandra.
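Data can also flow the other way, from Spark back into the same Cassandra table. The snippet below is a minimal sketch assuming the keyspace and table created above; the object name, the sample row and its values are purely illustrative, and the connection settings are the same placeholders as in the read example.

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object CassandraWriteSketch {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .set("spark.cassandra.connection.host", "10.0.0.00")   // provide your Cassandra host IP
      .set("spark.cassandra.auth.username", "Hadoop")        // provide your username
      .set("spark.cassandra.auth.password", "Hadoop")        // provide your password
      .setMaster("local[*]")
      .setAppName("CassandraWriteSketch")
    val sc = new SparkContext(conf)

    // Hypothetical sample row; the column names must match the employeeData table
    val rows = sc.parallelize(Seq(("E122", "Ravi", "Chennai")))
    rows.saveToCassandra("employeedetails", "employeedata",
      SomeColumns("empid", "empname", "empaddress"))
  }
}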

 

How to integrate Spark with Oracle DB 12c?

What is Oracle DB 12c?

Oracle Database 12c is a high-performance, enterprise-class database and the first Oracle database designed for the cloud. It introduced features such as pluggable databases and the multitenant architecture. It also offers Oracle Database In-Memory, an optional add-on that provides in-memory capabilities; this option makes Oracle Database 12c the first Oracle database to offer real-time analytics.

Code to integrate Spark with Oracle 12c:

Below is the code that connects Spark to Oracle 12c.

val dataframe_oracle = spark.read.format("jdbc")
  .option("url", "jdbc:oracle:thin:system/<your DB password>@//localhost:1521/orcl12c")
  .option("driver", "oracle.jdbc.driver.OracleDriver")
  .option("dbtable", "system.employee")   // <your table name>
  .load()

Get an acknowledgement by printing a confirmation message:

println("Connection created with Oracle 12c")

Create sample data in Oracle 12c and retrieve it in Spark using Scala code:

Open the Oracle 12c SQL shell and log in with your credentials.

Run the following command to create a table called "books":

CREATE TABLE system.books (

book_id VARCHAR2(20),

title VARCHAR2(50));

To insert data into books, run the following commands:

INSERT INTO system.books(book_id, title) VALUES (1021, 'Oracle12c');

INSERT INTO system.books(book_id, title) VALUES (1021, 'SparkBasics');

Now to read the inserted data, use the following command:

SELECT * FROM books;

Then run the COMMIT command to persist the records in the table.

Now retrieve this data in Spark by executing the following code in the Spark Eclipse project.

Note: to establish the Oracle 12c connection successfully, you need to add the ojdbc7 jar to Eclipse.

import org.apache.spark.sql._
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.log4j._
import java.sql.DriverManager
import java.sql.Connection

object Oracle {

  def main(args: Array[String]): Unit = {

    // Print errors only
    Logger.getLogger("org").setLevel(Level.ERROR)

    val spark = SparkSession
      .builder()
      .appName("JDBC")
      .master("local[*]")
      .config("spark.sql.warehouse.dir", "C:/Exp/")
      .getOrCreate()

    // Read the system.books table over JDBC
    val dataframe_oracle = spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:system/<your Password here>@//localhost:1521/orcl12c")
      .option("driver", "oracle.jdbc.driver.OracleDriver")
      .option("dbtable", "system.books")
      .load()

    import spark.implicits._

    val employee = dataframe_oracle.toDF()
    employee.printSchema()
    employee.createOrReplaceTempView("books")

    val results = spark.sql("SELECT * from books").collect()
    results.foreach(println)
  }
}

Here you can see the output in the console.

 

This is how you can integrate Spark with Oracle DB 12c.
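A DataFrame can also be written back to Oracle over JDBC. The following is a minimal sketch under the same connection assumptions as above; the target table system.books_copy and the object name are hypothetical.

import java.util.Properties
import org.apache.spark.sql.SparkSession

object OracleWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("OracleWriteSketch")
      .master("local[*]")
      .getOrCreate()

    // Read the existing table with the same JDBC options as in the example above
    val books = spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:system/<your Password here>@//localhost:1521/orcl12c")
      .option("driver", "oracle.jdbc.driver.OracleDriver")
      .option("dbtable", "system.books")
      .load()

    // Append the rows to a hypothetical copy table over JDBC
    val props = new Properties()
    props.setProperty("user", "system")
    props.setProperty("password", "<your Password here>")
    props.setProperty("driver", "oracle.jdbc.driver.OracleDriver")

    books.write
      .mode("append")
      .jdbc("jdbc:oracle:thin:@//localhost:1521/orcl12c", "system.books_copy", props)
  }
}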

 

How to integrate Spark with HBase and get sample data from HBase

What is HBase?

HBase is an open-source, non-relational (NoSQL) database that runs on top of HDFS (Hadoop Distributed File System) and provides real-time read/write access to your big data. It can store very large tables, i.e. billions of rows by millions of columns. One can store data in HDFS either directly or through HBase, and HBase provides fast lookups for large tables.

Code to integrate Spark with HBase:

Below is the code that connects Spark to HBase.

val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "10.0.x.xx")              // IP address where HBase/ZooKeeper is running (local or server)
conf.set("hbase.zookeeper.property.clientPort", "2181")

Get an acknowledgement by printing a confirmation message:

println("Connection created with HBase")

Create sample data in HBase and retrieve it in Spark using Scala code:

Open the HBase shell by using the following command: hbase shell

Run the following command to create a table called "employee":
create 'employee', 'emp personal data', 'emp professional data'

To insert data into employee, run the following commands:

put 'employee', '1', 'emp personal data:name', 'govardhan'
put 'employee', '1', 'emp personal data:city', 'kurnool'
put 'employee', '1', 'emp professional data:designation', 'manager'
put 'employee', '1', 'emp professional data:salary', '30000'

Now to read the inserted data, use the following command:
scan 'employee'

Now retrieve this data in Spark by using the following code in the Spark Eclipse project.

Note: to establish the HBase connection successfully, you need to add jars matching your installed HBase version to Eclipse; for example, if you have installed HBase 1.2.5, add the HBase 1.2.5 jar files.
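If the project is built with sbt rather than by adding jars to Eclipse manually, a dependency roughly like the one below would bring in the matching client jars (the version is an assumption based on the HBase 1.2.5 install mentioned in the note):

// build.sbt sketch -- version assumed to match the installed HBase noted above
libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.2.5"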

package com.hbase

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory, Get, Put}
import org.apache.hadoop.hbase.util.Bytes

object tessst {

  val conf: Configuration = HBaseConfiguration.create()

  def main(args: Array[String]): Unit = {

    conf.set("hbase.zookeeper.quorum", "10.0.x.xx")            // IP address where HBase/ZooKeeper is running (local or server)
    conf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection: Connection = ConnectionFactory.createConnection(conf)
    val table = connection.getTable(TableName.valueOf("employee"))

    for (rowKey <- 1 to 1) {
      // Fetch the row and read each column by family and qualifier
      val result = table.get(new Get(Bytes.toBytes(rowKey.toString())))
      val nameDetails = result.getValue(Bytes.toBytes("emp personal data"), Bytes.toBytes("name"))
      val cityDetails = result.getValue(Bytes.toBytes("emp personal data"), Bytes.toBytes("city"))
      val designationDetails = result.getValue(Bytes.toBytes("emp professional data"), Bytes.toBytes("designation"))
      val salaryDetails = result.getValue(Bytes.toBytes("emp professional data"), Bytes.toBytes("salary"))

      val name = Bytes.toString(nameDetails)
      val city = Bytes.toString(cityDetails)
      val designation = Bytes.toString(designationDetails)
      val salary = Bytes.toString(salaryDetails)

      println("Name is " + name + ", city " + city + ", Designation " + designation + ", Salary " + salary)
    }

    table.close()
    connection.close()
  }
}

Now execute the code to retrieve the data from HBase.

 

This is how you can integrate Spark with HBase.
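Rows can also be inserted programmatically from Scala instead of through the HBase shell. This is a minimal sketch using the HBase client Put API against the employee table created above; the object name, the row key "2" and the sample values are hypothetical.

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseWriteSketch {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "10.0.x.xx")            // IP address where HBase/ZooKeeper is running
    conf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection = ConnectionFactory.createConnection(conf)
    val table = connection.getTable(TableName.valueOf("employee"))

    // Hypothetical new row with key "2"; column families match the shell example above
    val put = new Put(Bytes.toBytes("2"))
    put.addColumn(Bytes.toBytes("emp personal data"), Bytes.toBytes("name"), Bytes.toBytes("ravi"))
    put.addColumn(Bytes.toBytes("emp personal data"), Bytes.toBytes("city"), Bytes.toBytes("hyderabad"))
    put.addColumn(Bytes.toBytes("emp professional data"), Bytes.toBytes("designation"), Bytes.toBytes("developer"))
    put.addColumn(Bytes.toBytes("emp professional data"), Bytes.toBytes("salary"), Bytes.toBytes("25000"))
    table.put(put)

    table.close()
    connection.close()
  }
}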

Enabling Auto Deploy Trigger functionality in Bamboo

I came across a requirement to auto-trigger deployment of a job after a successful build in Bamboo, but unfortunately there is no built-in option in Bamboo for automatically triggering a deployment.

By default, below are the only two separate features available in Bamboo:

  1. Build
  2. Deploy

Using these options the administrator can perform either action individually, Build or Deploy, but there is no way to automatically trigger a deployment as soon as the build job completes.

After thorough research, I found that auto-triggered deployment can be achieved through the custom ‘Bamboo After Deployment Trigger’ plugin, which is available for download from the Atlassian Marketplace.

The link to download the required plugin is provided below for your reference:

https://marketplace.atlassian.com/plugins/com.atlassianlab.Bamboo.plugins.Bamboo-after-deployment-trigger-plugin/server/overview

Solution:
  1. To download the plugin, visit the link above on the Atlassian Marketplace and download the After Deployment Trigger plugin as shown below.
  2. The plugin will be downloaded to the default download directory on your local system.
  3. After downloading the plugin, go to the Bamboo dashboard, click on the BAMBOO ADMINISTRATION tab and then click on Add-ons in the list as shown below.
  4. You will be redirected to the Manage add-ons page; here click on the Upload add-on option as shown below.
  5. A separate ‘Upload add-on’ window will pop up, where you can browse to the downloaded plugin location, choose the Bamboo After Deployment Trigger plugin and click on Upload as shown below.
  6. Now navigate to your job configuration page, click on the Triggers tab and then on Add trigger as shown below.
  7. After clicking on the Add trigger button, a select trigger popup will appear under the Repository Polling section; here select After deployment.
  8. The Trigger configuration section will then be displayed on the right-hand side of the same page as shown below; choose the Deployment Project from the list and click on Save trigger.

By following the above process you can successfully configure a trigger that runs the deployment project as soon as the build plan completes.


TIBCO To MuleSoft Migration Strategy

In the current IT market, enterprises are focusing on innovative methods to automate the various tasks associated with a smooth migration from legacy platforms to cloud enablement. This includes a migration framework and methodologies to follow in order to accomplish the migration tasks systematically.

This article provides a brief overview of middleware migrations, specifically how to migrate a TIBCO 6.x stack to MuleSoft 3.8 in phases. It also explains the risks to be considered while doing the complete migration. Below is the mapping from TIBCO to MuleSoft.

To migrate business applications from TIBCO to Mule, follow a high-level approach in phases and segregate the business applications into phases instead of migrating everything at once.

  1. Identify & Plan:
  • Assess the current system and list all the available services, mappings, message flows and palettes involved, and assess the configuration and environment of the platform
  • Identify the core patterns implemented in the existing TIBCO framework
  • Group the patterns by complexity (simple, medium, complex) and by their impact on the business
  • Perform a POC first instead of migrating the whole framework
  2. Challenges
  • Identify the architectural approaches that TIBCO and MuleSoft each follow and implement
  • Identify the unique connector components implemented for each pair of interfaces in a point-to-point integration model
  • Rebuilding of interfaces might be required because of fundamental differences in approach while migrating the legacy resources
  • Identify the common changes, which can include endpoint URLs, credentials, schema definitions, authentication, certificates and IPs/hosts
  • The migration cutover should be handled carefully for the existing interfaces to avoid disruptions in production
  3. Migrate

Migrate the interfaces from TIBCO to the MuleSoft framework in multiple phases without disrupting the business in production or its customers. There are three phases to this migration: Co-exist, Migrate, Rewrite.

Phase I: Co-exist strategy

  • The first phase of the migration involves integrating with Mule ESB without rewriting TIBCO, by leveraging endpoints such as JMS, SOAP, File and S/FTP
  • You can add incremental features and new interfaces using Mule ESB, while keeping the existing TIBCO integrations that are already in production intact.

Phase II: Neutralize TIBCO runtime and keep IDE

  • Migrate the resource definitions and transform the business logic, then host it as-is in Mule ESB with the Java message processor
  • Use this phase to migrate away from TIBCO in production and go live with Mule ESB as the runtime for legacy TIBCO integrations. You can use the TIBCO IDE during the migration effort but not at runtime; use Mule ESB or CloudHub as the runtime instead of the TIBCO integration server.

Phase III: Rewrite in Mule ESB and complete the migration

  • Create Mule configuration XML files, re-implement the business workflows in Mule ESB, configure MuleSoft connection points, and use the MuleSoft graphical data mapper instead of the TIBCO graphical mapper. Neither the TIBCO runtime nor the IDE is required from here onwards.
  4. Build and Deploy
  • Build the migrated Mule ESB code in Anypoint Studio (or outside of it) and deploy it onto a standalone server, an MMC-managed server, or CloudHub, depending on the business needs
  5. Functional Testing
  • Establish and test connectivity to application endpoints as soon as possible to work through network access issues
  • Perform end-to-end functional system testing
  • Coordinate testing with end users and client applications
  • Perform load testing, including the high-availability cluster
  • Perform backup/disaster recovery failover testing

Snippet: Technical mapping in TIBCO and MuleSoft

Migration Potential Benefits
  • Minimizes the development effort of building application flows from scratch on the API integration platform
  • Saves migration effort and cost, and the client also benefits from the Anypoint Platform
  • Unified connectivity: connect faster with an integration platform for SOA, SaaS and APIs that is agile enough for any use case, from simply extending a legacy system with a lightweight API to advanced Service Oriented Architecture re-platforming
  • Reduces development time drastically with the speed and agility of a flexible environment
  • Pre-built components for standards-based architecture, and developer-friendly open-source tools
  • Future-proof: upgrade SOA frameworks with the flexibility to easily adapt and adopt future technologies, and integrate heterogeneous systems on premises or in the cloud

Middleware Migrations

The era of middleware evolution started over a decade ago, providing software solutions that ease integration burdens by linking up systems in a simplified way. Middleware can create as many problems as it solves if care is not taken when adopting new, emerging middleware tools. Two foremost trends are reshaping the middleware space, upgrade and migration, in order to meet new customer requirements. The space is growing consistently, both in commercial enterprise products and in open source, and is expanding further to produce business-enabling middleware platforms.

Migration plays a predominant role amid continuous changes in emerging technologies, infrastructure and frameworks. Adopting new technology has become an ongoing process in any business, and thereby the need for upgrades and migration is felt extensively.

Re-Consider Migration

Enterprises or customers have to re-consider their integration suites and migrate their legacy integrations to a new framework if any of the following apply, or to stay in sync with market trends:

  • Most of the integration suites are legacy ones with a lack of support, which the business has been struggling with for years
  • Minimal scope to integrate disparate systems with the existing integration suites
  • Architectural changes in the frameworks
  • Unsupported adapters that prevent participation in the digital transformation era
  • No API management platform that easily connects to or exposes REST APIs
  • No proliferation and adoption of SaaS and PaaS options
  • High licensing costs
  • Low ROI (return on investment)

 

Real Migration

In today’s market, many enterprises mislead customers into believing that migration is a single-step process from legacy to cloud enablement. Is that real? Is there any tool or framework in the market that does a straight migration in a single step?

Migration means something different from what these enterprises are talking about. It is not just exporting from one framework, importing into another and keeping the business running. It is understanding the existing system design and architecture, analysing the bottleneck issues, and implementing the interfaces in a new framework that overcomes those issues and enables features for future challenges.

Some of the proven migrations within the same corporation:

1. Full and successful migration tools:
  • ICAN 5.x to JCAPS 5.x
  • Oracle Fusion 10g SOA to Oracle Fusion 11g/12c SOA

2. Partial migration:
  • JCAPS 5.x to Oracle Fusion 11g/12c (the old code and artifacts are packaged to run in a new container)
  • TIBCO 5.x to TIBCO 6.x (a few components have been altered as per the new architecture)
  • The TIBCO migration tool saves 40 to 60% of the coding effort when moving from 5.x to 6.x

 

Identify Right Tools for Migration

Gartner is one of the leading research and advisory firms forecasting current and future technologies; considering your requirements, shortlist vendors based on the following criteria:

  • Identify the tools that suit your enterprise needs
  • Identify the effort required to set the tool up and its ease of use
  • Easy tool navigation with a top-down approach model that provides complete visibility into business performance
  • Training and user community

 

Designing Migration Path:

Plan a migration path that streamlines the current legacy systems into reusable components and facilitates integration, so that products and services can adopt new features in the future:

  • Analyse the design and architectural approach of the existing interfaces
  • Identify the interfaces and group them by pattern
  • Identify interfaces where there is no change in the architecture implementation (e.g. File/FTP to File/FTP)
  • Identify the interfaces that were developed using custom methods due to the non-availability of adapters in legacy versions
  • Identify the new features in the tool (annotations, configurations, exception handling and so on)
  • Identify whether the new tool supports grouping similar interfaces based on a customized framework
  • Assess how easily the environment can scale up/down without downtime
  • Identify the security features supported
  • Check for support for external analytics tools

Customers have to focus on migrating their legacy integrations onto the new framework tools and look for innovation-focused enterprises that can automate the tasks for a smooth migration from one middleware to another. The migration offering should ensure these benefits through robust migration solutions.

Migrating TIBCO Interfaces to MuleSoft with Database and Salesforce Connector