Wednesday 19 October 2016

Why use an enum for a singleton

ENUM for singleton

If you have been writing singletons prior to Java 5, you know that even with double-checked locking you could end up with more than one instance. That issue was fixed by the Java memory model improvements and the guarantees provided by volatile variables from Java 5 onwards, but double-checked locking is still tricky for many beginners to write correctly.

Enum, on the other hand, is a cakewalk.

Enums are also classes. Enum instances are static fields of the class and so are initialized as part of class loading, when you first access the class.
Class loading is synchronized internally, which ensures enum instances are singletons (singletons within the same classloader, that is; if the same enum is loaded by multiple classloaders you will get multiple instances).

So yes, if the same enum gets loaded by more than one classloader (when classloading games are being played by, for example, a web app container), you will have multiple incompatible instances in memory.

But the million-dollar question is: why would someone do that?

An example of the enum vs. the double-checked way of creating a singleton.

Using Enum

public enum Singleton{
    instance;
    public void showMe(){
    }
}

Using Double Check Singleton

public class DoubleCheckSingleton {

    // must be static (one copy for the class) and volatile (safe publication, Java 5+)
    private static volatile DoubleCheckSingleton singletonInstance;

    private DoubleCheckSingleton() {
    }

    public static DoubleCheckSingleton getInstance() {
        if (singletonInstance == null) {
            synchronized (DoubleCheckSingleton.class) { /* double checking */
                if (singletonInstance == null) {
                    singletonInstance = new DoubleCheckSingleton();
                }
            }
        }
        return singletonInstance;
    }

    public void showMe() {
    }
}

Usage

Using Enum

Singleton.instance.showMe();

Using DoubleCheckSingleton Implementation.

DoubleCheckSingleton.getInstance().showMe();
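One more reason to prefer the enum form, sketched below (class names here are illustrative): reflection cannot create a second instance of an enum, because Constructor.newInstance refuses enum types, whereas a private constructor on an ordinary singleton class can be forced open with setAccessible(true).

```java
import java.lang.reflect.Constructor;

public class EnumReflectionDemo {
    enum Singleton { INSTANCE }

    // Returns true if reflection fails to create a second enum instance.
    static boolean reflectionBlocked() {
        try {
            // An enum's compiler-generated constructor takes (String name, int ordinal).
            Constructor<Singleton> c =
                    Singleton.class.getDeclaredConstructor(String.class, int.class);
            c.setAccessible(true);
            c.newInstance("ROGUE", 1);
            return false; // would mean a second instance was created
        } catch (IllegalArgumentException e) {
            return true;  // "Cannot reflectively create enum objects"
        } catch (ReflectiveOperationException e) {
            return true;  // also counts as blocked
        }
    }

    public static void main(String[] args) {
        System.out.println("reflection blocked: " + reflectionBlocked());
    }
}
```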

Tuesday 6 September 2016

Managing Transactions in Cassandra

Locking is a complicated problem for distributed systems. It also usually leads to slow operations.
Implementing locks in Cassandra has been considered and decided against. You can see the history, conversation, and ultimate resolution in this Jira –
https://issues.apache.org/jira/browse/CASSANDRA-5062.
  1. There are some external options that allow you to do some locking over Cassandra (e.g. Hector and others), but these are suboptimal solutions.
We already know how to get linearizable consistency if we route all requests through a single master. In a fully distributed system, it is less obvious. As stated above, early attempts in Cassandra tried to address this by wrapping a lock around sensitive operations, e.g. with the Cages library or with Hector's native locks. But these exposed edge cases, so Cassandra looked for something better.

So Cassandra went ahead with Paxos.

The Paxos consensus protocol allows a distributed system to agree on proposals with a quorum-based algorithm, with no masters required and without the problems of two-phase commit. There are two phases to Paxos: prepare/promise, and propose/accept.



Prepare/promise is the core of the algorithm. Any node may propose a value; we call that node the leader. (Note that many nodes may attempt to act as leaders simultaneously! This is not a “master” role.) The leader picks a ballot and sends it to the participating replicas. If the ballot is the highest a replica has seen, it promises to not accept any proposals associated with any earlier ballot. Along with that promise, it includes the most recent proposal it has already received.

If a majority of the nodes promise to accept the leader’s proposal, it may proceed to the actual proposal, but with the wrinkle that if a majority of replicas included an earlier proposal with their promise, then that is the value the leader must propose. Conceptually, if a leader interrupts an earlier leader, it must first finish that leader’s proposal before proceeding with its own, thus giving us our desired linearizable behavior.
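The two phases described above can be sketched in a few lines of Java. This is an illustrative model only, not Cassandra's actual implementation; the class and method names are invented for this sketch:

```java
// One replica's view of the Paxos prepare/promise and propose/accept rules.
public class PaxosReplica {
    private long promisedBallot = -1;     // highest ballot this replica has promised
    private long acceptedBallot = -1;     // ballot of the most recent accepted proposal
    private String acceptedValue;         // value of that proposal, if any

    /** Phase 1 (prepare/promise): promise only if this ballot is the highest seen. */
    public synchronized boolean promise(long ballot) {
        if (ballot <= promisedBallot) {
            return false; // an equal or higher ballot was already promised
        }
        promisedBallot = ballot;
        return true;
    }

    /** Sent back along with a promise: the most recent proposal already accepted. */
    public synchronized String lastAcceptedValue() {
        return acceptedValue;
    }

    /** Phase 2 (propose/accept): accept unless a higher ballot was promised meanwhile. */
    public synchronized boolean accept(long ballot, String value) {
        if (ballot < promisedBallot) {
            return false;
        }
        acceptedBallot = ballot;
        acceptedValue = value;
        return true;
    }
}
```

A leader that sees an earlier proposal returned with a majority of promises must re-propose that value first, which is what gives the linearizable behavior.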

Lightweight transactions in CQL

Lightweight transactions can be used for both INSERT and UPDATE statements, using the new IF clause. Here’s an example of registering a new user:
INSERT INTO USERS (login, email, name, login_count)
values ('jbellis', 'jbellis@datastax.com', 'Jonathan Ellis', 1)
IF NOT EXISTS
And an example of resetting his password transactionally:
UPDATE users
SET reset_token = null, password = 'newpassword'
WHERE login = 'jbellis'
IF reset_token = 'some-generated-reset-token'

Important points about Cassandra LWT (lightweight transactions)
  1. LWTs are, by definition, slower than regular insert/update statements in Cassandra and are designed for a minority of use cases, say around 1% of your workload.
  2. LWTs only work within a single partition; inserts to other partitions will not block an LWT.

Friday 2 September 2016

Think I will Buy Me a Football Team

Here is the solution for SPOJ ANARC08G.
Here, instead of reading the data from a file, we read it from the console.

Code:

package com.sandeep.spoj;

import java.util.Scanner;

public class ANARC08G {

    public static int[][] readFromConsole() {

        int[][] matrix;
        int row, column;

        Scanner scan = new Scanner(System.in);

        System.out.println("Matrix Creation");

        System.out.println("\nEnter number of rows :");
        row = Integer.parseInt(scan.nextLine());

        System.out.println("Enter number of columns :");
        column = Integer.parseInt(scan.nextLine());

        matrix = new int[row][column];
        System.out.println("Enter the data :");

        for (int i = 0; i < row; i++) {
            for (int j = 0; j < column; j++) {
                matrix[i][j] = scan.nextInt();
            }
        }
        return matrix;
    }

    public static void main(String[] args) {

        int[][] input = readFromConsole();
        int n = input.length; // use the actual matrix size instead of hardcoding 4
        int[] creditArray = new int[n];
        int[] debitArray = new int[n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                creditArray[i] += input[i][j]; // row i: amounts person i receives
                debitArray[i] += input[j][i];  // column i: amounts person i pays out
            }
        }
        int totalMoneyLeft = 0;
        int totalSum = 0;
        for (int i = 0; i < n; i++) {
            if ((creditArray[i] - debitArray[i]) >= 0)
                totalMoneyLeft += (creditArray[i] - debitArray[i]);
            totalSum += creditArray[i];
        }
        System.out.println(totalMoneyLeft);
        System.out.println(totalSum);
    }
}

Monday 22 August 2016

Akka Part 1

Why Akka ?

Most of us who have done multithreading in the past won't deny how hard and painful it is to manage multithreaded or concurrent applications.
Having to deal with threads, locks, race conditions, and so on is highly error-prone and can lead to code that is difficult to read, test, and maintain.
It aches when you see that you don't have an easier way to recover from errors in your sub-tasks, or when you hit those zombie bugs you find hard to reproduce, or when your profiler shows that your threads are spending a lot of time blocking wastefully before writing to shared state.

What is the Akka Framework?

Akka is a toolkit and runtime for building highly concurrent, distributed, and fault tolerant applications on the JVM. Akka is written in Scala, with language bindings provided for both Scala and Java.
Akka’s approach to handling concurrency is based on the Actor Model.

WHAT ARE ACTORS?

Treat actors like an employee and an HR department: the employee sits in some remote location, HR in some other location, and they both interact through mail.
So what all happens?

MESSAGING :

So the employee sends an email to HR from his inbox, and HR replies to the email from its inbox.


CONCURRENCY

Now, imagine there are multiple employees sending requests to multiple HR departments to handle different types of requests.
Nothing changes, actually. Everybody has their own mailbox.


FAILOVER.

There are chances that an HR person is on sick leave, so there has to be someone who can handle his work for that day; a member of the same team steps up and replies to the employee's queries.
Points to note :
  1. There could be a pool of Actors who do different things.
  2. An Actor could do something that causes an exception. It wouldn’t be able to recover by itself. In which case a new Actor could be created in place of the old one. Alternatively, the Actor could just ignore that one particular message and proceed with the rest of the messages.

MULTITASKING

There is a possibility that the same HR person is involved in both medical insurance and employee engagement, so the same HR can handle multiple types of requests.

CHAINING.

There is a possibility that you have sent several mails to HR, and the HRs send you a consolidated mail containing the resolution of all your queries,
i.e. we can make a hierarchy that the HR team follows for replying to any query.
So let's define the actor model here:
  1. The employee and HR become our Actors.
  2. The mailbox can be considered a queue from which messages are taken up and processed.
  3. Mail messages are immutable objects.
  4. There is a component that delivers a message composed in the source inbox to the recipient inbox; let's call it the Message Dispatcher.

Sunday 21 August 2016

AKKA (Actor Messaging) -Part 2

Now let's discuss the messaging part of Actors:



Broadly, these are the six steps followed when a message is passed to an actor:
  1. The employee creates something called an ActorSystem.
  2. It uses the ActorSystem to create something called an ActorRef. The message (MSG) is sent to the ActorRef (a proxy to the HR Actor).
  3. The ActorRef passes the message along to a Message Dispatcher.
  4. The Dispatcher enqueues the message in the target Actor's MailBox.
  5. The Dispatcher then puts the MailBox on a thread (more on that in the next section).
  6. The MailBox dequeues a message and eventually delegates it to the actual HR Actor's receive method.
Let’s look at each step in detail now. You can come back and revisit these six steps once we are done.

The EmployeeActor Program.

Let's consider this Employee Actor as the main program and call it EmployeeActorApp.
As we understand from the picture, the Employee Actor,
  1. Creates an ActorSystem
  2. Uses the ActorSystem to create a proxy to the HRActor (ActorRef)
  3. Sends the Message to the proxy.
Let’s focus on these three points alone now.

Creating an Actor System

The ActorSystem is the entry point into the actor world. ActorSystems are what you use to create and stop Actors, or even shut down the entire actor environment.

On the other end of the spectrum, Actors are hierarchical and the ActorSystem is also similar to the java.lang.Object or scala.Any for all Actors - meaning, it is the root for all Actors. When you create an Actor using the ActorSystem’s actorOf method, you create an Actor just below the ActorSystem.

The code for initializing the ActorSystem looks like.
val system=ActorSystem("HrMessageingSystem")  
The HrMessageingSystem is simply a name you give to your ActorSystem.

Creating a Proxy for HR Actor

val hrActorRef:ActorRef = actorSystem.actorOf(Props[HRActor])
actorOf is the Actor-creation method of the ActorSystem. But, as you can see, it doesn't return the HRActor which we need. It returns something of type ActorRef.

The ActorRef acts as a Proxy for the actual Actors. The clients do not talk directly with the Actor. This is Actor Model’s way of avoiding direct access to any custom/private methods or variables in the HrActor or any Actor for that sake.

To repeat, you send messages only to the ActorRef and it eventually reaches your actual Actor. You can never talk to your Actor directly.

Send a Message to the Proxy

Now that we have an actor system and a reference to the actor, we would like to send requests to the HR actor reference. We send a message to an actor using the ! operator, also called tell.
//send a message to the HR Actor
 hrActorRef ! Message
The tell is also known as fire-and-forget: there is no acknowledgement returned from a tell.
When a message is sent to an actor, the actor's receive method receives the message and processes it further. receive is a partial function and has the following signature:
def receive: PartialFunction[Any, Unit]
The return type of receive is Unit, and therefore this function is side-effecting.

Here is the sample program for what we have discussed till now:
object EmployeeActorApp extends App {
  //Initialize the ActorSystem
  val actorSystem = ActorSystem("HrMessageingSystem")

  //construct the HR Actor Ref
  val hrActorRef = actorSystem.actorOf(Props[HRActor])

  //send a message to the HR Actor
  hrActorRef ! Message("SICK")

  //Let's wait for a couple of seconds before we shut down the system
  Thread.sleep(2000)

  //Shut down the ActorSystem
  actorSystem.shutdown()
}
You’ll have to shut down the ActorSystem, or otherwise the JVM keeps running forever. And I am making the main thread sleep for a little while just to give the HRActor time to finish off its task.

The Message

We just sent a Message to the ActorRef but we didn’t see the message class at all!
// Scala does not allow two case classes with the same name in one scope,
// so a single case class with a default argument covers both the empty
// and the string-carrying message.
case class Message(someString: String = "")
As we know, the Message is for the requests that come to the HRActor. The Actor would respond back with a MessageResponse.

Dispatcher & A Mailbox

The ActorRef delegates the message-handling functionality to the Dispatcher. Under the hood, while we created the ActorSystem and the ActorRef, a Dispatcher and a MailBox were created. Let’s see what they are about.

MailBox

Every Actor has one MailBox. As per our analogy, every HR has one mailbox too. The HR has to check the mailbox and process the messages. In the actor world, it’s the other way round: the mailbox, when it gets a chance, uses the Actor to accomplish its work.

Also, the mailbox has a queue to store and process the messages in FIFO fashion, a little different from our regular inbox where the latest message is at the top.

Dispatcher

Dispatcher does some really cool stuff. From the looks of it, the Dispatcher just gets the message from the ActorRef and passes it on to the MailBox. But there’s one amazing thing happening behind the scenes :

The Dispatcher wraps an ExecutorService (ForkJoinPool or ThreadPoolExecutor). It executes the MailBox against this ExecutorService.

Check out this code snippet from the Dispatcher:
protected[akka] override def registerForExecution(mbox: Mailbox, ...): Boolean = {  
    ...
    try {
        executorService execute mbox
    ...
}
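The same pattern can be sketched in plain Java. This is an illustrative model only, not Akka's actual code, and all names here are invented: the mailbox is a Runnable holding a FIFO queue, and the dispatcher submits the mailbox itself to an ExecutorService. (A real dispatcher additionally guarantees that one mailbox is only scheduled on one thread at a time.)

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class MailboxDemo {

    // Mailbox: a Runnable wrapping a FIFO queue; running it drains messages
    // through the actor's receive function (modeled as a Consumer<String>).
    static class Mailbox implements Runnable {
        final Queue<String> queue = new ConcurrentLinkedQueue<>();
        final Consumer<String> receive;
        Mailbox(Consumer<String> receive) { this.receive = receive; }
        void enqueue(String msg) { queue.add(msg); }
        @Override public void run() {
            String msg;
            while ((msg = queue.poll()) != null) {
                receive.accept(msg); // delegate to the actor's receive method
            }
        }
    }

    // Dispatcher: enqueues the message, then schedules the mailbox itself
    // on the executor, echoing "executorService execute mbox" above.
    static class Dispatcher {
        final ExecutorService pool = Executors.newFixedThreadPool(2);
        void dispatch(Mailbox mbox, String msg) {
            mbox.enqueue(msg);
            pool.execute(mbox);
        }
        void shutdown() throws InterruptedException {
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

    // Runs the demo and returns how many messages the actor handled.
    public static int runDemo() throws InterruptedException {
        List<String> handled = Collections.synchronizedList(new ArrayList<>());
        Mailbox hrMailbox = new Mailbox(msg -> handled.add("handled " + msg));
        Dispatcher dispatcher = new Dispatcher();
        dispatcher.dispatch(hrMailbox, "SICK");
        dispatcher.dispatch(hrMailbox, "PTO");
        dispatcher.shutdown();
        return handled.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo() + " messages handled");
    }
}
```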

HR Actor

The MailBox, when it gets its run method fired, dequeues a message from the message queue and passes it to the Actor for processing.

The method that eventually gets called when you tell a message to an ActorRef is the receive method of the target Actor.

The HRActor is a rudimentary class with a receive method which handles the messages.

HRActor.scala :
class HRActor extends Actor {
  def receive = {
    case Message(s) if s.equalsIgnoreCase("SICK") => println("Sick Leave Applied")
    case Message(s) if s.equalsIgnoreCase("PTO")  => println("PTO Applied")
  }
}
The HRActor’s receive method pattern matches on the messages passed. (Actually, it is good practice to also pattern match a default case, but there’s an interesting story to tell there.)

All that the receive method does is:
  1. Pattern match on the incoming Message
  2. Check the type of the message
  3. Process the message based on its type.

Wednesday 25 May 2016

Volatile vs Static


Many of us get confused between volatile and static variables. Both seem to maintain a single copy, so why do we need a volatile variable when static can do the job? But we cannot compare static and volatile variables based only on the number of copies they store. The difference is all about how those copies are read and written (the visibility guarantees involved).

Declaring a static variable in Java, means that there will be only one copy, no matter how many objects of the class are created. The variable will be accessible even with no Objects created at all. However, threads may have locally cached values of it.

When a variable is volatile and not static, there will be one variable for each Object. So, on the surface it seems no different from a normal instance variable, but it is totally different from static. However, even with object fields, a thread may cache a variable's value locally.

This means that if two threads update a variable of the same Object concurrently, and the variable is not declared volatile, there could be a case in which one of the threads has an old value in its cache.
Even if you access a static value through multiple threads, each thread can have its own locally cached copy! To avoid this you can declare the variable as static volatile, which forces every thread to read the current global value each time.


Example:

Static variable: If two threads (say T1 and T2) are accessing the same object and updating a variable which is declared static, then T1 and T2 can each keep their own local copy of the same object (including its static variables) in their respective caches, so an update made by T1 to the static variable in its local cache won't be reflected in T2's cached copy.


Volatile variable: If two threads (say T1 and T2) are accessing the same object and updating a variable which is declared volatile, then T1 and T2 can make their own local cache of the Object except for the variable which is declared volatile. So the volatile variable has only one main copy which is updated by the different threads, and an update made by one thread immediately becomes visible to the other.
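The visibility guarantee can be sketched with a classic flag example (illustrative code): the reader thread spins on the flag, and the volatile write from the main thread is guaranteed to become visible, letting the loop end. Without volatile, the reader could spin on a stale cached value indefinitely.

```java
public class VolatileDemo {
    // volatile gives the flag one authoritative copy: the reader thread is
    // guaranteed to see the writer's update instead of a stale cached value
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // spin until the writer's update becomes visible
            }
            System.out.println("reader saw the update");
        });
        reader.start();
        Thread.sleep(100);  // let the reader start spinning
        running = false;    // volatile write: visible to the reader
        reader.join(5000);  // without volatile, this join could hang forever
    }
}
```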


Here is a diagram for better explanation:







Monday 23 May 2016

ConcurrentModificationException Explained.


In this post we will try to understand why ConcurrentModificationException is thrown.

Yes, ConcurrentModificationException is thrown when the structure of a collection is changed during iteration without using the iterator's remove method. But there is a catch, and this blog explains it.

Now a code sample from the Iterator returned by ArrayList:
    public boolean hasNext() {
        return cursor != size;
    }

    public E next() {
        checkForComodification();
        <stuff>
        return <things>;
    }

    <more methods>

    final void checkForComodification() {
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
    }
Now from this we can figure out that ConcurrentModificationException is thrown in the next() method, not the hasNext() method.

Take, for example, removing an element:

public static void main(String[] args) {

    List<String> strList = new ArrayList<String>();
    strList.add("Sandeep");
    strList.add("Raju");

    Iterator<String> listItr = strList.iterator();
    int i = 0;
    while (listItr.hasNext()) {
        System.out.println("Loop count " + i++);
        String str = listItr.next();
        System.out.println("Name " + str);
        strList.remove(str);
    }
}
Now, as per the above code, the structure of the ArrayList is being changed during iteration without using the iterator's remove method, so you would expect a ConcurrentModificationException, but it will not be thrown. The output will be:

Output:

Loop count 0
Name Sandeep

Explanation :

When the program starts, it inserts 2 elements into the list. In the iterator's while loop we print the loop count, then the string value, and then remove that same string value from the list. Since the value has been removed, when control comes around for the second iteration, hasNext() returns false (the cursor now equals the shrunken size) and control never enters the loop again. So no ConcurrentModificationException is thrown, because we never reached the next() method a second time.
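For contrast, the safe way to remove while iterating is the iterator's own remove method (a small illustrative sketch): it keeps the iterator's expectedModCount in sync with the list's modCount, so next() never sees a "foreign" structural modification.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SafeRemoval {

    // Removes every element through the iterator itself; no
    // ConcurrentModificationException is thrown this way.
    public static List<String> removeAll(List<String> list) {
        Iterator<String> it = list.iterator();
        while (it.hasNext()) {
            it.next();    // must call next() before remove()
            it.remove();  // the structural change goes through the iterator
        }
        return list;
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("Sandeep");
        names.add("Raju");
        System.out.println(removeAll(names)); // prints []
    }
}
```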


Now, for the same case, consider the following example (adding an element):

public static void main(String[] args) {

    List<String> strList = new ArrayList<String>();
    strList.add("Sandeep");
    strList.add("Raju");

    Iterator<String> listItr = strList.iterator();
    int i = 0;
    while (listItr.hasNext()) {
        System.out.println("Loop count " + i++);
        String str = listItr.next();
        System.out.println("Name " + str);
        strList.add(str);
    }
}

Output:

Loop count 0
Name Sandeep
Loop count 1
Exception in thread "main" java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
at java.util.ArrayList$Itr.next(ArrayList.java:851)
at com.sandeep.java7.ConcModificationException.main(ConcModificationException.java:20)



Explanation :

When the program starts, it inserts 2 elements into the list. In the iterator's while loop we print the loop count, then the string value, and then add the same string value again. Now a new element has been added to the list, so when control comes around for the second iteration, hasNext() returns true because there are still elements to be traversed, and control goes inside the loop. When it reaches the next() method, the iterator detects that the structure of the collection has been modified (modCount != expectedModCount) and throws ConcurrentModificationException.





Wednesday 18 May 2016

Distributed Deletes in Cassandra



Cassandra cluster defines a ReplicationFactor that determines how many nodes each key and associated columns are written to. In Cassandra, the client controls how many replicas to block for on writes, which includes deletions. In particular, the client may, and typically will, specify a ConsistencyLevel of less than the cluster's ReplicationFactor, that is, the coordinating server node should report the write successful even if some replicas are down or otherwise not responsive to the write.

A delete operation can't just wipe out all traces of the data being removed immediately. If it did, and a replica missed the delete operation, then when that replica became available again it would treat the replicas that did receive the delete as having missed a write update, and repair them! So, instead of wiping out data on delete, Cassandra replaces it with a special value called a tombstone. The tombstone can then be propagated to replicas that missed the initial remove request.

Tombstones exist for a period of time defined by gc_grace_seconds (a table property). After data is marked with a tombstone, it is automatically removed during the normal compaction process.
Facts about deleted data to keep in mind:
  • Cassandra does not immediately remove data marked for deletion from disk.
  • The deletion occurs during compaction. If you use the size-tiered or date-tiered compaction strategy, you can drop data immediately by manually starting the compaction process. Before doing so, understand the documented disadvantages of the process.
  • A deleted column can reappear if you do not run node repair routinely.
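The grace period is tuned per table in CQL. As an illustrative sketch (the table name is invented; 864000 seconds, i.e. 10 days, is the default):

```sql
-- lower the tombstone grace period for a table (value in seconds)
ALTER TABLE users WITH gc_grace_seconds = 864000;
```

Lowering this value reclaims disk faster but shrinks the window in which a down replica can still receive the tombstone.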

Why deleted data can reappear

Marking data with a tombstone signals Cassandra to retry sending a delete request to a replica that was down at the time of the delete. If the replica comes back up within the grace period, it eventually receives the delete request. However, if a node is down longer than the grace period, it can miss the delete entirely, because the tombstone disappears after gc_grace_seconds. Cassandra always attempts to replay missed updates when a node comes back up. After a failure, it is a best practice to run node repair to fix inconsistencies across all of the replicas when bringing a node back into the cluster. If the node doesn't come back within gc_grace_seconds, remove the node, wipe it, and bootstrap it again.