Boomi Common Logging & Error Handling Framework

bCLE (Boomi Common Logging & Error Handling Framework)

EAIESB is happy to announce the bCLE framework to all Boomi customers. If you are a Dell Boomi customer, please register with us at bcle@eaiesb.com.

We will implement bCLE free of cost. Demos will be available starting in August. Stay tuned.

 

Migrating TIBCO B2B/EDI (Business Connect & Business Works) interfaces to Dell Boomi B2B (Trading Partner) EDI (X12/4010 824 – Application Advice)

How to integrate Spark with Apache Cassandra?

What is Apache Cassandra?

Apache Cassandra is a free, open-source, distributed NoSQL database for handling large amounts of structured data across many commodity servers, providing a highly available service with no single point of failure. It supports replication (including multi-data-center replication), scalability, fault tolerance, tunable consistency, MapReduce, and its own query language (CQL). NoSQL databases are increasingly used in big data and real-time web applications.
Code to integrate Spark with Apache Cassandra:

Below is the code that connects Spark to Apache Cassandra.

val conf = new SparkConf()
conf.set("spark.cassandra.connection.host", "10.0.0.00 <provide your host IP>")
conf.set("spark.cassandra.auth.username", "Hadoop <provide your username>")
conf.set("spark.cassandra.auth.password", "Hadoop <provide your password>")
conf.setMaster("local[*]")
conf.setAppName("CassandraIntegration")

Print an acknowledgement using the code below:

print("Connection created with Cassandra")

Create sample data in Apache Cassandra and retrieve it into Spark using Scala code:

Here Apache Cassandra is installed on a Linux system, so I log in to Cassandra with the following command:

"cqlsh 10.0.0.27 -u Hadoop -p Hadoop" – this command is a combination of the hostname, username and password.

Run the following command to create a keyspace called "employeeDetails":

CREATE KEYSPACE employeeDetails WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 1};

To use the keyspace and create a table in Cassandra, run the following commands:

USE employeeDetails;

CREATE TABLE employeeData(EmpID text PRIMARY KEY, EmpName text, EmpAddress text);

To insert data into employeeData, run the following command:

INSERT INTO employeeData(EmpID, EmpName, EmpAddress) VALUES ('E121', 'Govardhan', 'Hyderabad');

Now to read the inserted data, use the following command:

SELECT * FROM employeeData;

Now retrieve this data in Spark by executing the following code in Eclipse.

Note: To connect to Cassandra successfully, you need to add the spark-cassandra-connector_2.11-2.0.1 jar to Eclipse.
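If you manage dependencies with sbt instead of adding the jar by hand in Eclipse, a declaration along the following lines (a sketch; the version is taken from the jar mentioned above) should pull in the same connector:

// build.sbt (sketch)
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "2.0.1"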

 

import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.log4j._

object Cassandra {

  def main(args: Array[String]) {

    // To print errors only
    Logger.getLogger("org").setLevel(Level.ERROR)

    // Creating the Cassandra connection
    val conf = new SparkConf()
    conf.set("spark.cassandra.connection.host", "10.0.0.00 <provide your host IP>")
    conf.set("spark.cassandra.auth.username", "Hadoop <provide your username>")
    conf.set("spark.cassandra.auth.password", "Hadoop <provide your password>")
    conf.setMaster("local[*]")
    conf.setAppName("CassandraIntegration")
    println("Connection created with Cassandra")

    val sc = new SparkContext(conf)
    val rdd = sc.cassandraTable("employeedetails", "employeedata")
    rdd.foreach(println)
  }
}

Here you can see the output in the console.
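As a small extension, the connector can also push column selection and filtering down to Cassandra instead of pulling the whole table. Below is a minimal sketch that reuses the SparkContext sc and the imports from the program above, and assumes the keyspace, table and column names created earlier (Cassandra lowercases unquoted identifiers, hence empid/empname):

val filtered = sc.cassandraTable("employeedetails", "employeedata")
  .select("empid", "empname")      // read only the columns that are needed
  .where("empid = ?", "E121")      // filter on the primary key inside Cassandra
filtered.foreach(println)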

This is how you can integrate Spark with Cassandra.

 

How to integrate Spark with Oracle DB 12c?

What is Oracle DB 12c?

Oracle Database 12c is a high-performance, enterprise-class database and the first Oracle database designed for the cloud. It introduced new features such as pluggable databases and a multitenant architecture. It also offers Oracle Database In-Memory, an optional add-on that provides in-memory capabilities, making 12c the first Oracle database to offer real-time analytics.

Code to integrate Spark with Oracle 12c:

Below is the code that connects Spark to Oracle 12c.

val dataframe_oracle = spark.read.format("jdbc")
  .option("url", "jdbc:oracle:thin:system/<your DB password>@//localhost:1521/orcl12c")
  .option("driver", "oracle.jdbc.driver.OracleDriver")
  .option("dbtable", "system.employee <your table name>")
  .load()

Print an acknowledgement using the code below:

print("Connection created with Oracle 12c")

Create sample data in Oracle 12c and retrieve it into Spark using Scala code:

Open the Oracle 12c SQL shell and log in with your credentials.

Run the following command to create a table called "books":

CREATE TABLE system.books (

book_id VARCHAR2(20),

title VARCHAR2(50));

To insert data into books, run the following commands:

INSERT INTO system.books(book_id, title) VALUES (1021, 'Oracle12c');

INSERT INTO system.books(book_id, title) VALUES (1021, 'SparkBasics');

Now to read the inserted data, use the following command:

Select * from books;

Then run the COMMIT command to save the records to the table.

Now retrieve this data in Spark by executing the following code in Eclipse.

Note: To connect to Oracle 12c successfully, you need to add the ojdbc7 jar to Eclipse.

import org.apache.spark.sql._
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.log4j._
import java.sql.DriverManager
import java.sql.Connection

object Oracle {

  def main(args: Array[String]): Unit = {

    Logger.getLogger("org").setLevel(Level.ERROR)

    val spark = SparkSession
      .builder()
      .appName("JDBC")
      .master("local[*]")
      .config("spark.sql.warehouse.dir", "C:/Exp/")
      .getOrCreate()

    val dataframe_oracle = spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:system/<your password here>@//localhost:1521/orcl12c")
      .option("driver", "oracle.jdbc.driver.OracleDriver")
      .option("dbtable", "system.books")
      .load()

    import spark.implicits._

    val employee = dataframe_oracle.toDF()
    employee.printSchema()
    employee.createOrReplaceTempView("books")

    val results = spark.sql("SELECT * from books").collect()
    results.foreach(println)
  }
}

Here you can see the output in the console.
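Going the other way is similar: Spark's generic JDBC writer can append rows back into an Oracle table. Below is a minimal sketch reusing the SparkSession and DataFrame from the program above; the target table system.books_copy is hypothetical and the connection details are placeholders:

dataframe_oracle.write.format("jdbc")
  .option("url", "jdbc:oracle:thin:system/<your password here>@//localhost:1521/orcl12c")
  .option("driver", "oracle.jdbc.driver.OracleDriver")
  .option("dbtable", "system.books_copy")   // hypothetical target table
  .mode("append")                           // append rows instead of overwriting
  .save()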

 

This is how you can integrate Spark with Oracle DB 12c.

 

How to integrate Spark with HBase and get sample data from HBase

  • What is HBase?

HBase is an open-source, non-relational (NoSQL) database that runs on top of HDFS (Hadoop Distributed File System) and provides real-time read/write access to your Big Data. It can store very large tables, i.e. billions of rows and millions of columns. Data can be stored in HDFS either directly or through HBase, and HBase provides fast lookups for large tables.

  • Code to integrate Spark with HBase:

Below is the code that connects Spark to HBase.

val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "10.0.x.xx <provide the IP address where HBase is installed, i.e. local or server>")
conf.set("hbase.zookeeper.property.clientPort", "2181")

Print an acknowledgement using the code below:

print("Connection created with HBase")

  • Create sample data in HBase and retrieve it into Spark using Scala code:

Open the HBase shell using the following command: "hbase shell"

Run the following command to create a table called "employee":
create 'employee', 'emp personal data', 'emp professional data'

To insert data into employee, run the following commands:

put 'employee','1','emp personal data:name','govardhan'
put 'employee','1','emp personal data:city','kurnool'
put 'employee','1','emp professional data:designation','manager'
put 'employee','1','emp professional data:salary','30000'

Now to read the inserted data, use the following command:
scan 'employee'

Now retrieve this data in Spark by using the following code in Eclipse.

Note: To connect to HBase successfully, you need to add the matching version of the HBase jars to Eclipse; for example, if you have installed HBase 1.2.5, add the HBase 1.2.5 jar files to Eclipse.
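Alternatively, if the project is built with sbt rather than by adding jars in Eclipse, a dependency along these lines (a sketch; the version is assumed to match the HBase 1.2.5 install mentioned above) brings in the HBase client API used in the code below:

// build.sbt (sketch)
libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.2.5"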

package com.hbase

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Connection
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.client.Get

object tessst {

  val conf: Configuration = HBaseConfiguration.create()

  def main(args: Array[String]): Unit = {

    conf.set("hbase.zookeeper.quorum", "10.0.x.xx <provide the IP address where HBase is installed, i.e. local or server>")
    conf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection: Connection = ConnectionFactory.createConnection(conf)

    // Get a handle to the employee table created in the HBase shell
    val table = connection.getTable(TableName.valueOf("employee"))

    for (rowKey <- 1 to 1) {
      val result = table.get(new Get(Bytes.toBytes(rowKey.toString())))
      val nameDetails = result.getValue(Bytes.toBytes("emp personal data"), Bytes.toBytes("name"))
      val cityDetails = result.getValue(Bytes.toBytes("emp personal data"), Bytes.toBytes("city"))
      val designationDetails = result.getValue(Bytes.toBytes("emp professional data"), Bytes.toBytes("designation"))
      val salaryDetails = result.getValue(Bytes.toBytes("emp professional data"), Bytes.toBytes("salary"))
      val name = Bytes.toString(nameDetails)
      val city = Bytes.toString(cityDetails)
      val designation = Bytes.toString(designationDetails)
      val salary = Bytes.toString(salaryDetails)
      println("Name is " + name + ", city " + city + ", Designation " + designation + ", Salary " + salary)
    }
  }
}

Now execute the code to retrieve the data from HBase.
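For completeness, writing a row back to HBase from the same program uses the Put class that is already imported. Below is a minimal sketch reusing the table handle from the code above; the row key "2" and the cell values are hypothetical:

val put = new Put(Bytes.toBytes("2"))   // hypothetical new row key
put.addColumn(Bytes.toBytes("emp personal data"), Bytes.toBytes("name"), Bytes.toBytes("ravi"))
put.addColumn(Bytes.toBytes("emp professional data"), Bytes.toBytes("designation"), Bytes.toBytes("developer"))
table.put(put)                          // insert (or update) the row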

 

This is how you can integrate Spark with HBase.

Enabling Auto Deploy Trigger functionality in Bamboo

I came across a requirement to automatically trigger deployment of a job after a successful build in Bamboo, but unfortunately Bamboo has no built-in option or feature for auto-deployment.

By default, below are the only two separate features available in Bamboo:

  1. Build
  2. Deploy

Using these options, the administrator can perform only one of the above actions at a time, either Build or Deploy; there is no way to automatically trigger deployment of a job as soon as the build job completes.

However, after thorough research, I found that auto-triggered deployment can be achieved through a custom plugin, the 'Bamboo After Deployment Trigger' plugin, which is available for download from the Atlassian Marketplace.

The link below can be used to download the required plugin:

https://marketplace.atlassian.com/plugins/com.atlassianlab.Bamboo.plugins.Bamboo-after-deployment-trigger-plugin/server/overview

 Solution:
  1. To download the plugin, go to the Atlassian Marketplace via the link above and download the After Deployment Trigger plugin as shown below.

  2. The plugin will be downloaded to the default download directory on your local system.
  3. After downloading the plugin, go to the Bamboo dashboard, click on the BAMBOO ADMINISTRATION tab and then click on Add-ons in the list, as shown below.

  4. You will be redirected to the Manage add-ons page; here click on the Upload add-on option, as shown below.

  5. A separate 'Upload add-on' window will pop up, where you can browse to the downloaded plugin location, choose the Bamboo After Deployment Trigger plugin and click Upload, as shown below.

  6. Now navigate to your job configuration page, click on the Triggers tab and then on Add trigger, as shown below.

  7. After clicking on the Add trigger button under the Repository Polling section, a select trigger popup will appear; here select After deployment.
  8. The Trigger configuration section will be displayed on the right-hand side of the same page; here choose the Deployment Project from the list and click on Save trigger, as shown below.

By following the above process, you can successfully configure a trigger that runs the deployment project as soon as the build plan completes.

TIBCO To MuleSoft Migration Strategy

In the current IT market, enterprises are focusing on innovative methods to automate the various tasks associated with a smooth migration from legacy systems to cloud enablement. This includes a migration framework and methodologies to follow in order to accomplish the migration tasks systematically.

This article provides a brief overview of middleware migrations, specifically how to migrate a TIBCO 6.x stack to MuleSoft 3.8 in phases. It also explains the risks to be considered while doing the complete migration. Below is the mapping from TIBCO to MuleSoft.

To migrate business applications from TIBCO to Mule, follow a high-level approach in phases: segregate the business applications into phases instead of migrating everything at once.

  1. Identify & Plan:
  • Assess the current system and list out all the available services, mappings, message flows and total palettes involved, and assess the configuration and environment of the platform
  • Identify the core patterns implemented in the existing TIBCO framework
  • Group the patterns by complexity, i.e. simple, medium, complex, and by their impact on the business
  • Perform a POC first instead of migrating the whole framework
  2. Challenges
  • Identify the architectural approaches that both TIBCO and MuleSoft follow and implement
  • Identify the unique connector components implemented for each pair of interfaces in a point-to-point integration model
  • Rebuilding of interfaces might be required because of fundamental differences in approach while migrating the legacy resources
  • Identify the common changes, which can include endpoint URLs, credentials, schema definitions, authentication, certificates and IPs/hosts
  • The migration cutover should be handled carefully with the existing interfaces to avoid disruptions in production
  3. Migrate

Migrate the interfaces from the TIBCO to the MuleSoft framework in multiple phases without disrupting the business in production or its customers. There are three phases to this migration: Co-exist, Migrate, Rewrite.

Phase I: Co-exist strategy

  • The first phase of the migration involves integrating with Mule ESB without rewriting TIBCO, by leveraging endpoints such as JMS, SOAP, File and S/FTP
  • You can add incremental features and new interfaces using Mule ESB, while keeping the existing TIBCO integrations intact that are already in production.

Phase II: Neutralize TIBCO runtime and keep IDE

  • Migrate the resource definitions and transform the business logic. Host it as-is in Mule ESB with the Java message processor
  • Use this phase to migrate from TIBCO’s production and go live with Mule ESB as the runtime for legacy TIBCO integrations. You can use TIBCO IDE during the migration effort but not during the runtime. Use Mule ESB or CloudHub as the runtime instead of the TIBCO Integration Server

Phase III: Rewrite in Mule ESB and complete the migration

  • Create Mule configuration XML files, re-implement the business workflows using Mule ESB for business logic, configure MuleSoft connection points, and use the MuleSoft graphical data mapper instead of the TIBCO graphical mapper. Neither the TIBCO runtime nor the IDE is required from here onwards.
  4. Build and Deploy
  • Build the migrated Mule ESB code through Anypoint Studio or outside of it, and deploy it onto a standalone server, an MMC server or CloudHub, depending on business needs
  5. Functional Testing
  • Establish and test connectivity to application endpoints as soon as possible to work through network access issues
  • Perform end-to-end functional system testing
  • Coordinate testing with end users and client applications
  • Perform load testing, including the high-availability cluster
  • Perform backup/disaster recovery failover testing

Snippet: Technical mapping in TIBCO and MuleSoft

Migration Potential Benefits
  • Minimizes the development effort of building the application flows from scratch on the API integration platform
  • Saves migration effort and cost; the client also benefits from the Anypoint Platform
  • Unified connectivity: connect faster with the only integration platform for SOA, SaaS and APIs that is agile enough for any use case, from simply extending a legacy system with a lightweight API to advanced Service-Oriented Architecture re-platforming for connectivity
  • Reduces development time drastically with the speed and agility of a flexible environment
  • Pre-built components for standards-based architecture, and developer-friendly open-source tools
  • Future-proof: upgrade SOA frameworks with the flexibility to easily adapt to and adopt future technology, and integrate heterogeneous systems on-premises or in the cloud

Middleware Migrations

The era of middleware evolution started over a decade ago, providing software solutions that ease integration burdens by linking up systems with simplified solutions. Middleware can create as many problems as it solves if care is not taken in adopting new, emerging middleware tools. There are two foremost forces reshaping the middleware space, upgrade and migration, the latest trends for meeting new customer requirements. The space is consistently growing, both in commercial enterprise products and in open source, and is expanding further into producing business-enabling middleware platforms.

Migration plays a predominant role amid the continuous changes in emerging technologies, infrastructure and frameworks. Adopting technology has become an ongoing process in any business, so the need for upgrade and migration is being felt extensively.

Re-Consider Migration

Enterprises (or customers) have to reconsider their integration suites and migrate their legacy integrations to a new framework if any of the following apply, or to stay in sync with market trends:

  • Most of the integration suites are legacy ones, with a lack of support and continuity, which enterprises have been struggling with for years

  • Minimal scope to integrate disparate systems with the existing integration suites

  • Architectural changes in the frameworks

  • Unsupported adapters, preventing participation in the digital transformation era

  • No API management platform that easily connects or exposes REST APIs

  • No proliferation or adoption of SaaS and PaaS options

  • High licensing cost

  • Low ROI (return on investment)

 

Real Migration

In today's market, many enterprises make claims and bluff customers into believing that migration is a single-step process from legacy to cloud enablement. Is that real? Is there any tool or framework in the market that does a straight migration in a single step?

Migration has a different meaning from what enterprises are really talking about. It is not just doing an export and import from one framework to another and keeping the business running. It is about understanding the existing system design and architecture, analyzing the bottleneck issues, and implementing in a new framework in a way that overcomes those issues and enables features that can adapt to future challenges.

Some of the proven migrations within the same corporation:

1. Full and successful migration tools:

  • ICAN 5.x to JCAPS 5.x

  • Oracle Fusion 10g SOA to Oracle Fusion 11g/12c SOA

2. Partial migration:

  • JCAPS 5.x to Oracle Fusion 11g/12c (the old code and artifacts are packaged to run in a new container)

  • TIBCO 5.x to TIBCO 6.x (a few components are altered as per the new architecture)

  • The TIBCO migration tool migrates/saves 40 to 60% of the coding effort from 5.x to 6.x

 

Identify Right Tools for Migration

Gartner is one of the leading research and advisory firms that forecasts current and future technologies; considering your requirements, shortlist the vendors based on the following criteria:

  • Identify the tools that suit your enterprise needs

  • Identify the effort required for setup and the ease of use

  • Easy tool navigation with a top-down approach model to provide complete business visibility and performance

  • Training and user community

 

Designing Migration Path:

Plan a migration path that can streamline the current legacy systems into reusable assets and facilitate integration, so that changes in products and services can adopt new features in the future:

  • Analyze the design and architectural approach of the existing interfaces

  • Identify the interfaces and group them by pattern

  • Interfaces where there is no change in the architecture implementation (File/FTP to File/FTP)

  • Identify the list of interfaces that were developed using custom methods due to the non-availability of adapters in legacy versions

  • Identify the new features in the tool (annotations, configurations, exception handling and so on)

  • Identify whether the new tool supports grouping similar interfaces based on a customized framework

  • How easily the environment can be scaled up/down without downtime

  • Identify the set of security features supported

  • Support for external analytics tools

Customers have to focus on migrating their legacy integrations onto the new framework tools and look for innovation-focused enterprises in order to automate the tasks of a smooth migration from one middleware to another. The migration solution offering should ensure these benefits through robust migration solutions.

Migrating TIBCO Interfaces to MuleSoft with Database and Salesforce Connector

STATISTICAL INFERENCE ON THE POPULATION DATA TO FIND THE CRITICAL REGION USING Z-STATISTIC TEST

Inference means making an assumption about an outcome in the population while performing analysis on a sample. This assumption has to be tested before coming to a conclusion, and this is called hypothesis testing. I have to warn you that this is a purely theoretical post intended to help you understand hypothesis testing with simple examples. Before you read this blog, make sure you understand the basics of statistics, such as mean, standard deviation, variance and the normal distribution. Once you have good knowledge of these topics, you can get into statistical inference using hypothesis testing.

Every hypothesis test involves two hypotheses:

1. Null hypothesis

2. Alternate hypothesis

Hypothesis testing is always about the population data. It always develops two statements, one null and one alternate, in order to make business decisions. These statements are expressed using the mean, and the hypothesized value is compared with the mean of the sample data. We make decisions about the performance of the population by taking a sample of the population and performing analysis in order to accept or reject the claim about the population.

The null hypothesis represents the claim made by the organization or customer, which is compared with the results obtained from observations of the sample data for the product; it is denoted Ho and expresses the statement given by the organization. The alternate hypothesis challenges the null hypothesis and works on proving that the claim is wrong. It is also used to help companies understand the productivity of the model they have created.

For example, the null hypothesis is denoted as Ho, and if we claim that the population mean is equal to 100, it can be written as

Ho: µ = 100

and then the alternate hypothesis challenges the above statement saying that

Ha: µ ≠ 100

Ha: µ < 100

Ha: µ > 100

So here "not equal to 100" actually means it could be < 100 or > 100. As per the real-time scenario, the two possibilities we consider for the alternate hypothesis are less than 100 and greater than 100.

Suppose we take the example of the Ford truck company, which has redesigned its F-150 truck to reduce noise issues and has claimed in its advertising that the truck is now quieter. The average noise of the truck was 68 decibels at 60 mph, which is actually heavy noise as per market standards.

So we need to understand that if the average noise of the redesigned truck is still 68 decibels, then the claim made by the organization is false; to justify their statement, the average noise has to be less than 68. We now take a sample out of the population of trucks and perform hypothesis testing. When we start hypothesis testing, the observation quoted by the company is taken as the null hypothesis. Since an average noise of 68 decibels is heavy, there is a possibility that the noise could be more than 68 too. During the analysis, if the null hypothesis turns out to be true, then the company has to stop production. So the null hypothesis is written as

Ho: µ >= 68

So the alternate hypothesis is written as

Ha: µ < 68

This challenges the null hypothesis in order to prove that the noise is lower than 68 decibels, that the company's statement that the truck is quieter is true, and that production can be continued. The alternate hypothesis only takes either greater than or less than the given value; it does not take the equal-to value, based on real-time scenarios, in order to provide a justifying solution.

If the null hypothesis is rejected, the Ford company has enough evidence to support that it is producing trucks with reduced noise. This is how decision making is performed on the basis of hypothesis testing. In order to perform hypothesis testing, the sample data taken from the population has to be normally distributed, which produces a bell curve on the normal distribution graph. On this curve you need to find the exact region that falls under the alternate hypothesis. To find it, the critical region is introduced, which is the region of the alternate hypothesis. If our analysis value falls in the critical region, it means the company has made a successful product, since the average noise of the trucks will be less than 68 decibels.

The region of the alternate hypothesis can be either right-tailed or left-tailed, or sometimes both.

So, for example, when Ho: µ <= 2 is the null hypothesis, then Ha: µ > 2 is the alternate hypothesis, which is a single (right) tail since you have only one statement for analysis; conversely, when the alternate is µ < 2, the critical region falls on the left side of the mean and it is called left single-tailed or lower-tailed.

The critical region is the region of the alternate hypothesis, which is either right-tailed or left-tailed, or sometimes tailed on both sides.

The right-tailed test has its critical region on the right, where the arrow indicates the hashed region as shown in the figure above, and the remaining region is taken as the region of the null hypothesis Ho. When the alternate can take up two possibilities, there will be two critical regions for the alternate and the remaining region will be Ho.

So what is the significance of this critical region and where is it used?

To understand this, we need to follow the steps of hypothesis testing:

  1. Formulate the hypothesis statements – null and alternate (mean/average)
  2. Take the sample data.
  3. Measure the sample for the mean.
  4. Use a test statistic to do the hypothesis testing.

We need a test statistic because there is a claim that the population has some kind of outcome. To test that, we take a sample and perform operations on the sample, but that does not mean the population has the same outcome as the sample. Some analysis is needed to bridge the gap from sample to population, and that is done using a test statistic.

Common test statistics include the z-test, t-test, F-test and chi-square test.

Z Statistic test:

When we take a population and a sample from it, we need to infer about the population on the basis of the sample.

If we know the standard deviation of the population and the sample size is more than 30, then the test statistic used is the z-test, as per the standards. While performing a hypothesis test, there is always a significance level, typically 5% or 10%, which indicates the acceptable probability of error (a type 1 error) that may occur while making the decision. This value is given by the company as per its standards. The significance level is based on the type 1 error, where the analyst mistakenly rejects the null hypothesis when it actually should have been accepted. This kind of error is possible with human involvement.

Once we calculate the value of the z-statistic by using the sample mean, the hypothesized mean, the standard deviation and the sample size, it is compared with the Z critical value. The formula is shown below.
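In standard notation: Z = (x̄ – µ) / (σ / √n), where x̄ is the sample mean, µ is the hypothesized population mean, σ is the population standard deviation and n is the sample size.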

Once you find the z-statistic value, you need to find the Z critical value, denoted Zα, which can be computed from the significance level (in R Studio, this is done with the qnorm function).

The Z critical value is usually given by the company as per the standards followed; if not, it can be calculated from the significance value.

The critical region in the normal distribution is the alternate hypothesis region. If the value of Zα is positive, the region is plotted on the right-hand side of the normal distribution. The starting point of the critical region is found by identifying the Z critical value, and the remaining region is considered the null hypothesis region. When the z-statistic value is greater than the Z critical value, it falls under the alternate hypothesis, so we can reject the null hypothesis.

The same applies for a left-tailed test, where the statistic may fall in the region of the alternate hypothesis on the left.

 

Z statistic:

1. We should know the population standard deviation.

2. The sample size should be more than 30.

3. Formulate the hypothesis statements and check whether the test is right-tailed or left-tailed.

4. Calculate the z-statistic.

5. We will be given the z critical value, or we can find it using the qnorm function and the alpha value.

Rule of thumb:

Right tailed:

1. if z statistic > z critical, then reject null

2. if z statistic < z critical, then accept null

Left tailed:

1. if z statistic < z critical, then reject null

2. if z statistic > z critical, then accept null
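To make the rule of thumb concrete, here is a small Scala sketch for the left-tailed Ford example; the sample figures (sample mean 66.5 dB, population standard deviation 4 dB, sample size 36) are hypothetical and chosen only to illustrate the calculation:

object FordNoiseZTest {
  def main(args: Array[String]): Unit = {
    val sampleMean = 66.5   // hypothetical average noise (dB) measured on the sample trucks
    val mu0        = 68.0   // hypothesized mean from Ho: µ >= 68
    val sigma      = 4.0    // assumed known population standard deviation
    val n          = 36     // sample size (> 30, so the z-test applies)

    // z statistic = (sample mean - hypothesized mean) / (sigma / sqrt(n))
    val z = (sampleMean - mu0) / (sigma / math.sqrt(n))   // -2.25 for these numbers

    val zCritical = -1.645  // left-tailed critical value at alpha = 0.05, i.e. qnorm(0.05)

    println(f"z statistic = $z%.2f, z critical = $zCritical%.3f")
    if (z < zCritical) println("Reject the null hypothesis: the redesigned trucks are quieter than 68 dB")
    else println("Fail to reject the null hypothesis")
  }
}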

Two tailed Test:

The critical value of z is divided into two parts, since there are two alternate regions that reject the null hypothesis. If alpha is 0.05, the z critical value will be qnorm(1 – alpha/2),

which gives 1.96. So 1.96 is on the right side of the normal distribution and -1.96 on the left side. This is how hypothesis testing can be performed: by tracking the critical region and finding the z-statistic value in order to accept or reject the null hypothesis about the population.