Wednesday, 16 December 2015

Solr Master - Slave Configuration with DataImportHandler & Scheduling

In this post we will see how to set up Solr master-slave replication as shown below -


For simplicity, let's assume that we have two nodes, node1 and node2. Node1 is the master node and Node2 is the slave node.

1. Install solr-5.3.1 on both Node1 (master) and Node2 (slave).
2. Create a Solr core on both Node1 and Node2 using the command
    $> bin/solr create [-c name] [-d confdir] [-n configName] [-shards #] [-replicationFactor #] [-p port]

Let's assume the name of the core is test_core.
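
For example, on a default installation the core can be created like this (assuming the default port 8983):

$> bin/solr create -c test_core -p 8983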

So in both instances, if we go to ${SOLR_HOME}/server/solr we will see test_core, which has a conf directory, a core.properties file and a data directory.

Now let's start with the master-slave configuration -

Master Setup 

If we navigate to the conf directory within the test_core directory under server/solr we will see the solrconfig.xml file.

Edit the file and add

<requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
        <str name="enable">${master.replication.enabled:false}</str>
        <str name="replicateAfter">commit</str>
        <str name="replicateAfter">optimize</str>
        <str name="replicateAfter">startup</str>
    </lst>
</requestHandler>

Then add master.replication.enabled=true to the core.properties file of the master core (located in the test_core directory).
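
After this change the master's core.properties might look like this (the name property is written by Solr when the core is created):

name=test_core
master.replication.enabled=true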


Slave Setup

If we navigate to the conf directory within the test_core directory under server/solr we will see the solrconfig.xml file.

Edit the file and add

<requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="slave">
        <str name="enable">${slave.replication.enabled:false}</str>
        <str name="masterUrl">http://${masterserver}/solr/${solr.core.name}/replication</str>
        <str name="pollInterval">00:05:00</str>
    </lst>
</requestHandler>

Then add the following to the core.properties file of the slave core (located in the test_core directory); masterserver is the master's host:port and solr.core.name is the core name -

slave.replication.enabled=true
masterserver=52.33.134.44:8983
solr.core.name=test_core


That's it, we are done with the master-slave configuration.
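
To verify the setup you can query the replication handler on both nodes; a quick check (host names assumed):

$> curl "http://<master_host>:8983/solr/test_core/replication?command=details"
$> curl "http://<slave_host>:8983/solr/test_core/replication?command=details"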

DataImportHandler

Using Solr's DataImportHandler we can create indexes in Solr directly from data stores like MySQL, Oracle, PostgreSQL, etc.

Let's continue with the previous example to configure a DataImportHandler -
1. Edit the solrconfig.xml file under the conf directory of your core and add -

<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
      <str name="config">data-config.xml</str>
  </lst>
</requestHandler>

2. Create a data-config.xml file within the conf directory with the following content -

<dataConfig>
    <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="" user="" password=""/>
    <document name="">
        <entity name="" query=""
                deltaQuery="<some_date_condition> &gt; '${recommendation.last_index_time}';">
            <field column="" name="" />
            <!-- ... more field mappings ... -->
            <field column="allcash_total_annualized_return_growth" name="Allcash_total_annualized_return_growth" />
        </entity>
    </document>
</dataConfig>

3. Create the corresponding field mappings in the managed-schema file for index creation, as sketched below.
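
A field mapping in managed-schema looks like the following sketch (the field name and type are placeholders; the name should match the name attributes in data-config.xml):

<field name="<field_name>" type="string" indexed="true" stored="true" />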

4. Make sure the JAR file for the driver class is available in the lib directory (or any other directory) and that you have referenced it in the solrconfig.xml file, like

<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
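
The DataImportHandler JAR itself must also be on the classpath; with a default Solr 5.x layout something like the following should work (the second line assumes you copied the MySQL connector JAR into dist/):

<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="mysql-connector-java-.*\.jar" />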

We are done with DataImportHandler configuration.
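
Once Solr is restarted, imports can be triggered through the handler over HTTP, for example:

http://<host>:8983/solr/test_core/dataimport?command=full-import
http://<host>:8983/solr/test_core/dataimport?command=delta-import&clean=false&commit=true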

Scheduling: 

Out of the box, Solr does not support scheduling for delta imports.
Clone either of

1. https://github.com/badalb/solr-data-import-scheduler.git
2. https://github.com/mbonaci/solr-data-import-scheduler.git

Build a JAR file from the cloned source and put it in the ${SOLR_HOME}/server/solr-webapp/webapp/WEB-INF/lib directory.

3. Make sure, regardless of whether you have a single-core or multi-core Solr, that you create dataimport.properties located in your solr.home/conf (NOT solr.home/core/conf) with content like

 #  to sync or not to sync
#  1 - active; anything else - inactive
syncEnabled=1

#  which cores to schedule
#  in a multi-core environment you can decide which cores you want synchronized
#  leave empty or comment it out if using single-core deployment
syncCores=coreHr,coreEn

#  solr server name or IP address
#  [defaults to localhost if empty]
server=localhost

#  solr server port
#  [defaults to 80 if empty]
port=8080

#  application name/context
#  [defaults to current ServletContextListener's context (app) name]
webapp=solrTest_WEB

#  URL params [mandatory]
#  remainder of URL
params=/select?qt=/dataimport&command=delta-import&clean=false&commit=true

#  schedule interval
#  number of minutes between two runs
#  [defaults to 30 if empty]
interval=10

4. Add the application listener to the web.xml of the Solr web app (${SOLR_HOME}/server/solr-webapp/webapp/WEB-INF/web.xml)

<listener>
  <listener-class>org.apache.solr.handler.dataimport.scheduler.ApplicationListener</listener-class>
</listener>

Restart Solr so that changes are reflected.

Happy searching .....

Tuesday, 15 December 2015

Integrating Tableau Desktop with Spark SQL

In this post we will see how we can integrate Tableau Desktop with Spark SQL. Tableau’s integration with Spark brings tremendous value to the Spark community – we can visually analyse data without writing a single line of Spark SQL code. That’s a big deal because creating a visual interface to our data expands the Spark technology beyond data scientists and data engineers to all business users. The Spark connector takes advantage of Tableau’s flexible connection architecture that gives customers the option to connect live and issue interactive queries, or use Tableau’s fast in-memory database engine.

Software requirements :-

We will be using the following software to do the integration -
1. Tableau Desktop 9.2.0
2. Hive 1.2.1
3. Spark 1.5.2 for Hadoop 2.6.0

We can skip Hive and work directly with Spark SQL. For this example, however, we will use Hive, import the Hive tables into Spark SQL, and integrate them with Tableau.

Hive Setup :-

1. Download and install Hive 1.2.1.
2. Download and copy the MySQL connector JAR file to the ${HIVE_HOME}/lib directory so Hive will use a MySQL metastore.
3. Start Hive: ${HIVE_HOME}/bin $ ./hive
4. Create a table and insert some data into it:

create table product(productid INT, productname STRING, price FLOAT, category STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

INSERT INTO TABLE product VALUES(1,'Book',25,'Stationery');
INSERT INTO TABLE product VALUES(2,'Pens',10,'Stationery');
INSERT INTO TABLE product VALUES(3,'Sugar',40.05,'Household Item');
INSERT INTO TABLE product VALUES(4,'Furniture',1200,'Interiors');
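
A quick sanity check from the Hive shell:

hive> SELECT * FROM product;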

Hive setup is complete now.

Spark Setup :-

1. Download and extract Spark 1.5.2 for Hadoop 2.6.0.
2. Copy hive-site.xml from the ${HIVE_HOME}/conf directory to the ${SPARK_HOME}/conf directory.
3. In hive-site.xml, strip the unit suffix from time values (e.g., change 0s to 0 and <xyz>ms to <xyz>); otherwise Spark may throw a NumberFormatException.
4. Define the Spark master IP, export SPARK_MASTER_IP=<host_ip_addr>, in the spark-env.sh file located in the ${SPARK_HOME}/conf directory (without this the Thrift server will not work).

5. Start the Spark master and slave:
  1. ${SPARK_HOME}/sbin $ ./start-master.sh
  2. ${SPARK_HOME}/sbin $ ./start-slaves.sh
6. Go to http://localhost:8080/ and check that the worker has started.

Now it's time to start the Thrift server -

7. ${SPARK_HOME}/sbin $ ./start-thriftserver.sh --master spark://<spark_host_ip>:<port> --driver-class-path ../lib/mysql-connector-java-5.1.34.jar --hiveconf hive.server2.thrift.bind.host localhost --hiveconf hive.server2.thrift.port 10001

This will start the Thrift server on port 10001.
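
You can sanity-check the Thrift server with Beeline, which ships with Spark (the JDBC URL assumes the host/port used above):

${SPARK_HOME}/bin $ ./beeline -u jdbc:hive2://localhost:10001
beeline> show tables;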


8. Go to http://localhost:8080/ and check that the Spark SQL application has started.

Now go to Tableau Desktop
  1. Select Spark SQL.
  2. Enter the host as localhost and the Thrift server port from step 7, here it is 10001.
  3. Select the type as SparkThriftServer and Authentication as User Name.
  4. Keep the rest of the fields empty and click OK.
You are done!!! Happy report building using Tableau-Spark.




Monday, 19 October 2015

Vagrant - Puppet Java development environment setup


Vagrant:-

Vagrant is an open-source (MIT) tool for building and managing virtualised development environments.

Simply put, Vagrant makes it really easy to work with virtual machines. According to the Vagrant docs:

"If you’re a designer, Vagrant will automatically set everything up that is required for that web app in order for you to focus on doing what you do best: design. Once a developer configures Vagrant, you don’t need to worry about how to get that app running ever again. No more bothering other developers to help you fix your environment so you can test designs. Just check out the code, vagrant up, and start designing."

Puppet:-

Puppet is a configuration management tool that is extremely powerful in deploying, configuring, managing, and maintaining a server machine.


Librarian Puppet:-

Librarian-puppet is a project by the amazing Tim Sharpe to take Librarian, a general reimplementation of Bundler, and provide an implementation for the Puppet ecosystem. It has support for installing Puppet modules from the Puppet Forge as well as Github, and provides any number of other features like version locking of installed modules.

Simply, we can have a VirtualBox and Vagrant setup and write shell scripts/batch files to install software based on the development environment.

If we use Puppet and librarian-puppet along with VirtualBox, we only need to concentrate on setting up Vagrant, Puppet, and librarian-puppet; the rest will be taken care of by the Puppet modules themselves.

Virtual Box Setup

Download VirtualBox from here [https://www.virtualbox.org/wiki/Downloads] for the environment you are working on. Once downloaded, follow the instructions to install it.

If we want to work with Vagrant we must have VirtualBox installed.

Vagrant Setup

With VirtualBox installed we are ready to go ahead with the Vagrant installation.
Download Vagrant from here [https://www.vagrantup.com/downloads.html]. Once downloaded, follow the instructions to install it.

Puppet Setup

You can write an environment-specific shell script/batch file to install Puppet manually, or the script can be executed from the Vagrantfile itself when running $ vagrant up. For simplicity, let's assume we will execute the shell script/batch file manually to install Puppet.


Librarian Puppet Setup

You can write an environment-specific shell script/batch file to install librarian-puppet manually, or the script can be executed from the Vagrantfile itself when running $ vagrant up. For simplicity, let's assume we will execute the shell script/batch file manually to install librarian-puppet.

Now we have VirtualBox, Vagrant, Puppet, and librarian-puppet installed.


Let's create our first instance

$ mkdir my_first_instance
$ cd my_first_instance
$ vagrant init precise32  http://files.vagrantup.com/precise32.box

Once successfully executed, this will create a Vagrantfile with some default settings in the empty directory created above. Now execute

$ vagrant up

Wait a few minutes; this will start the virtual box [an Ubuntu machine]. Now we can interact with the virtual box using SSH:

$ vagrant ssh

[
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)

 * Documentation:  https://help.ubuntu.com/
New release '14.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Welcome to your Vagrant-built virtual machine.
Last login: Fri Sep 14 06:22:31 2012 from 10.0.2.2
vagrant@precise32:~$  ]

$ exit - command to exit the virtual box SSH session

$ vagrant suspend - command to suspend the virtual machine

$ vagrant destroy - command to remove the setup

Development Environment Setup

At this point we have a virtual box up and running, vagrant setup, puppet and librarian puppet installed.

$ cd my_first_instance
$ mkdir puppet
$ cd puppet
$ mkdir  manifests
$ mkdir modules
$ touch Puppetfile
$ cd manifests
$ touch default.pp

The Puppetfile will have the modules required for your development and the default.pp file will have the dependencies.

Sample Puppetfile

forge "http://forge.puppetlabs.com"

mod "puppetlabs/stdlib", "3.2.1"
mod "puppetlabs/apt", "1.5.0"
mod "puppetlabs/mysql", "2.2.3"
#mod "puppetlabs/rabbitmq", "5.0.0"
mod "thomasvandoren/redis", "0.10.0"
mod "jbussdieker/memcached"
mod "puppetlabs/git"
mod "tylerwalts/jdk_oracle"
mod "gini/gradle"

Puppet modules can be found by executing the command $ sudo puppet module search <module_name>, e.g. mysql.
In the default.pp file under manifests, define the dependencies like -

# --- MySQL --- #

class { '::mysql::server':
 root_password => 'foo'
}

Once defined, go to the puppet directory under the parent directory [$ cd my_first_instance/puppet] and execute
$ sudo librarian-puppet install - this will install the modules under the puppet directory
$ cd ..
$ vagrant reload --provision
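
For the provision step to pick up these manifests, the Vagrantfile needs a Puppet provisioner block; a minimal sketch, assuming the directory layout created above:

Vagrant.configure("2") do |config|
  config.vm.box = "precise32"
  # Run the Puppet provisioner against the manifests/modules created above
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file  = "default.pp"
    puppet.module_path    = "puppet/modules"
  end
end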

Once provisioning has completed successfully, the software modules will be installed in the virtual box. We can now log in to the VM and start using it.

Sample virtual box setup scripts are available here [https://github.com/badalb/vagrant-java]


Tuesday, 9 June 2015

Real Time Data Streaming with Epoch

Streams of data are becoming ubiquitous today – clickstreams, log streams, event streams, and more. Building a clickstream monitoring system, for example, where data is in the form of a continuous clickstream rather than discrete data sets, requires the use of continuous processing rather than ad-hoc, one-time queries.

In this blog post we will explore how we can build a real-time monitoring system with the Spring framework, Kafka, Storm, Redis, Node.js and EpochJS ( https://fastly.github.io/epoch/ ).

We will have a producer producing stream data to a Kafka topic, a Storm spout consuming the stream, and Storm bolts publishing those streams to Redis. A simple Node application subscribed to Redis continuously pushes the consumed stream to Epoch over an open socket, and Epoch renders a real-time view of the stream for the end user. So the architecture looks like -



Implementation code can be found here -

Spring-Kafka :- https://github.com/badalb/spring-kafka.git
Real Time Streaming :- https://github.com/badalb/epoch-realtime-data-stream.git 

Monday, 8 June 2015

External Project Dependency in Gradle

Let's assume that you are working on a multi-module, distributed project organised like the structure below -

root1
       |__ project_1
       |__ project_2

root2
       |__ project_X
       |__ project_Y


Now we want to add a dependency on project_1 of root1 in project_Y of root2. You can achieve that easily with -

1. Add the snippet below in the settings.gradle file of project_Y

include ":project_1"
project(":project_1").projectDir = file("<path_to_project_1>")

2. Add a dependency on project_1 in the build.gradle file of project_Y

compile project(":project_1")
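
For instance, if root1 and root2 sit side by side on disk, the project directory could be referenced with a relative path (a hypothetical layout):

include ":project_1"
project(":project_1").projectDir = file("../root1/project_1")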

Wednesday, 22 April 2015

Multiple Authentication Schemes in Spring Security

While developing server side applications using the Spring framework, we sometimes encounter situations where we need to support web based clients (typically developed in Backbone.js or Angular.js, or JSP based multi-form applications) and mobile clients (Android, iOS, etc.). The RESTful services exposed may have third party clients as well.


If we have a consumer facing web interface, typically accessed by web browsers, where we need to maintain a user session, we end up with a form based authentication mechanism. For third party clients consuming B2B services we can have token based security in place, and mobile users can be supported by OAuth2. Let's see how we can implement all three types of security mechanism in a web application using Spring Security.


Security Configuration files for REST and Form based security :-

@Configuration
@EnableWebSecurity
public class MultiHttpSecurityConfig {

    @Configuration
    @Order(1)
    public static class RestSecurityConfig extends WebSecurityConfigurerAdapter {

        @Bean
        public RestAuthenticationEntryPoint restAuthenticationEntryPoint() {
            RestAuthenticationEntryPoint entryPoint = new RestAuthenticationEntryPoint();
            entryPoint.setRealmName("<your_realm_name>");
            return entryPoint;
        }

        @Bean
        public RestAuthenticationProvider restAuthenticationProvider() {
            return new RestAuthenticationProvider();
        }

        @Bean
        public RestSecurityFilter restSecurityFilter() {
            RestSecurityFilter filter = null;
            try {
                filter = new RestSecurityFilter(authenticationManagerBean());
            } catch (Exception e) {
                e.printStackTrace();
            }
            return filter;
        }

        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http.csrf().disable();
            http.antMatcher("/api/**")
                .sessionManagement()
                .sessionCreationPolicy(SessionCreationPolicy.STATELESS).and()
                .exceptionHandling()
                .authenticationEntryPoint(restAuthenticationEntryPoint()).and()
                .authorizeRequests()
                .antMatchers("/api/**").authenticated().and()
                .addFilterBefore(restSecurityFilter(), BasicAuthenticationFilter.class);
        }

        @Override
        protected void configure(AuthenticationManagerBuilder authManagerBuilder) throws Exception {
            authManagerBuilder.authenticationProvider(restAuthenticationProvider());
        }
    }

    @Configuration
    public static class FormSecurityConfig extends WebSecurityConfigurerAdapter {

        @Autowired
        DataSource dataSource;

        @Autowired
        private <custom_user_detail_service> customUserDetailsService;

        @Autowired
        CustomSecuritySuccessHandler customSecuritySuccessHandler;

        @Autowired
        CustomSecurityFailureHandler customSecurityFailureHandler;

        @Autowired
        private <password_encoder> passwordEncoder;

        @Autowired
        private CustomAccessDeniedHandler customAccessDeniedHandler;

        @Override
        public void configure(WebSecurity web) throws Exception {
            web.ignoring().antMatchers("/resources/**");
        }

        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http.csrf().disable();
            http.authorizeRequests()
                .antMatchers("/", "/login.html", "/app/**", "/assets/**", "/login", "/failure",
                        "/register", "/public/**", "/oauth/v1/**").permitAll()
                .anyRequest().authenticated();
            http.formLogin().loginPage("/login").failureUrl("/")
                .successHandler(customSecuritySuccessHandler)
                .failureHandler(customSecurityFailureHandler).permitAll().and()
                .logout().logoutSuccessUrl("/login").permitAll().and()
                .rememberMe().and()
                .exceptionHandling().accessDeniedHandler(customAccessDeniedHandler);
        }

        @Override
        protected void configure(AuthenticationManagerBuilder authManagerBuilder) throws Exception {
            authManagerBuilder.userDetailsService(customUserDetailsService)
                .passwordEncoder(passwordEncoder);
        }
    }
}


OAuth2 Security Configuration:-

@Configuration
public class GlobalAuthenticationConfig extends GlobalAuthenticationConfigurerAdapter {

    @Autowired
    private <custom_user_detail_service> oAuthUserDetailService;

    @Autowired
    private <password_encoder> commonPasswordEncoder;

    @Override
    public void init(AuthenticationManagerBuilder auth) throws Exception {
        auth.userDetailsService(oAuthUserDetailService).passwordEncoder(commonPasswordEncoder);
    }
}


@Configuration
@EnableAuthorizationServer
public class OAuth2AuthorizationServerConfig extends AuthorizationServerConfigurerAdapter {

    @Autowired
    DataSource dataSource;

    @Autowired
    private AuthenticationManager authenticationManager;

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.jdbc(dataSource);
    }

    @Bean
    public TokenStore tokenStore() {
        return new JdbcTokenStore(dataSource);
    }

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
        endpoints.tokenStore(tokenStore()).authenticationManager(authenticationManager);
    }

    @Override
    public void configure(AuthorizationServerSecurityConfigurer oauthServer) throws Exception {
        oauthServer.allowFormAuthenticationForClients();
    }
}

@Configuration
@EnableResourceServer
public class OAuth2ResourceServerConfig extends ResourceServerConfigurerAdapter {

    private static final String HU_REST_RESOURCE_ID = "rest_api";

    @Autowired
    DataSource dataSource;

    @Bean
    public TokenStore tokenStore() {
        return new JdbcTokenStore(dataSource);
    }

    @Override
    public void configure(ResourceServerSecurityConfigurer resources) {
        resources.resourceId(HU_REST_RESOURCE_ID).stateless(false);
    }

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.requestMatchers().antMatchers("/oauth/v1/**").and()
            .authorizeRequests().antMatchers("/oauth/v1/**")
            .access("#oauth2.hasScope('read') or (!#oauth2.isOAuth() and hasRole('ROLE_USER'))");
    }
}

With these configurations the incoming requests with URL pattern -
i. <context>/api/<version>/<some_request>  will be intercepted by RestSecurityConfig
ii. <context>/oauth/v1/<some_request> will be intercepted by OAuth2ResourceServerConfig
iii. All other requests will be intercepted by FormSecurityConfig

[Feel free to clone https://github.com/badalb/multi-security-config-web.git for detail code.]

Wednesday, 8 April 2015

Token Based REST API Security

While writing REST APIs we sometimes wonder how to secure them. Hosting REST services over HTTPS secures the communication channel, but it does not secure the individual REST APIs.

The possible options to secure a REST API could be -

1. HTTP Basic Authentication
2. OAuth based Authentication
3. Token Based Authentication

Let's see how we can use token based security for fully stateless REST APIs. To understand token based security we need to understand the following -

ACCESS KEY: The access key is a unique key string passed in the request header with every HTTP request the client sends to the server. We can consider the access key as the user's identity in the REST paradigm.

SECRET KEY: For every REST API user/client we generate a secret key. This secret key is used to encrypt some plain text to generate a cipher, which is used as the request signature.

REQUEST SIGNATURE/MESSAGE HASH: The cipher text generated by encrypting the plain text with the secret key. This string is passed as the request signature.

HMAC(<some_plain_text> , <SECRET KEY>) = <SOME_REQUEST_SIGNATURE>

HMAC: A keyed-hash message authentication code (HMAC) is a specific construction for calculating a message authentication code (MAC) involving a cryptographic hash function in combination with a secret cryptographic key.

Using these concepts, the client generates a request signature and sends it to the server along with the actual parameters and the access key. Let's move on to some code examples so that we can understand it better.

Code Example

Let's assume that we store the user's access key and secret key in a database table, <rest_user_key>, which looks like this (the aes_key column is left empty here):

id | app_id     | access_key  | hmac_key | aes_key
---+------------+-------------+----------+--------
1  | <some_app> | access_key1 | sec_key1 |
2  | <some_app> | access_key2 | sec_key2 |
3  | <some_app> | access_key3 | sec_key3 |
4  | <some_app> | access_key4 | sec_key4 |



HMAC Java Code:

import java.io.IOException;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Random;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import org.apache.commons.codec.binary.Hex;

public class HMACKeyGeneratorServiceImpl {

    private static final char[] symbols = new char[36];
    private static final String HMAC_SHA1_ALGORITHM = "HmacSHA1";
    private static final String SHA1PRNG_ALGORITHM = "SHA1PRNG";
    private static final String SHA1_ALGORITHM = "SHA-1";
    private final char[] buf;
    private final Random random = new Random();

    // Alphabet of 0-9 and a-z used for access key generation
    static {
        for (int idx = 0; idx < 10; ++idx)
            symbols[idx] = (char) ('0' + idx);
        for (int idx = 10; idx < 36; ++idx)
            symbols[idx] = (char) ('a' + idx - 10);
    }

    HMACKeyGeneratorServiceImpl() {
        buf = new char[20];
    }

    // Generates a random 20-character alphanumeric access key
    public String generateAccessKey() {
        for (int idx = 0; idx < buf.length; ++idx)
            buf[idx] = symbols[random.nextInt(symbols.length)];
        return new String(buf).toUpperCase();
    }

    // Generates a hex-encoded secret key from the SHA-1 digest of a random number
    public String generateHMACKey() throws GeneralSecurityException {
        SecureRandom prng = SecureRandom.getInstance(SHA1PRNG_ALGORITHM);
        String randomNum = Integer.toString(prng.nextInt());
        MessageDigest sha = MessageDigest.getInstance(SHA1_ALGORITHM);
        byte[] result = sha.digest(randomNum.getBytes());
        return hexEncode(result);
    }

    private String hexEncode(byte[] aInput) {
        StringBuilder result = new StringBuilder();
        char[] digits = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
                'a', 'b', 'c', 'd', 'e', 'f' };
        for (int idx = 0; idx < aInput.length; ++idx) {
            byte b = aInput[idx];
            result.append(digits[(b & 0xf0) >> 4]);
            result.append(digits[b & 0x0f]);
        }
        return result.toString();
    }

    // Computes the hex-encoded HMAC-SHA1 signature of data using the given key
    public String generateHMAC(String data, String hexEncodedKey) throws GeneralSecurityException, IOException {
        byte[] keyBytes = hexEncodedKey.getBytes();
        SecretKeySpec signingKey = new SecretKeySpec(keyBytes, HMAC_SHA1_ALGORITHM);
        Mac mac = Mac.getInstance(HMAC_SHA1_ALGORITHM);
        mac.init(signingKey);
        byte[] rawHmac = mac.doFinal(data.getBytes());
        byte[] hexBytes = new Hex().encode(rawHmac);
        return new String(hexBytes, "UTF-8");
    }
}

Client Side Signature Generation:

Since the client wants to post some data to the server, it must generate a request signature using the secret key shared with it. The server will generate the same request signature on its side and match the two.
The client and server must agree on a common plaintext-generating algorithm so that both compute the same hash.

Let's assume that we will follow <param_name1>=<param_value1>;<param_name2>=<param_value2>; - that is, each parameter name and value is paired using = and multiple parameters are combined using ; as the delimiter.
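
For example, a signature for two hypothetical parameters could be generated like this (a sketch using the HMACKeyGeneratorServiceImpl above; the secret key value is made up):

HMACKeyGeneratorServiceImpl hmacService = new HMACKeyGeneratorServiceImpl();
// Plaintext built with the agreed <param_name>=<param_value>; scheme
String plainText = "amount=100;currency=USD;";
String secretKey = "a94a8fe5ccb19ba61c4c0873d391e987982fbbd3"; // hypothetical hex-encoded secret key
String signature = hmacService.generateHMAC(plainText, secretKey);
// signature is sent in the MESSAGE_HASH request header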


Authentication Flow

Client Side: 

The client wants to post n parameters to the server, param1 through param n, with values value1 through value n.

The client generates a signature taking a subset of these parameter name-value pairs.

  • Access Key : <client_access_key>
  • Secret Key: <client_secret_key>
  • Request Signature: <signature> = HMAC(<param_name1>=<param_value1>;<param_name2>=<param_value2>;, <client_secret_key>)


So the client sends a request to the server with ACCESS_KEY = <client_access_key> and MESSAGE_HASH = <signature> in the header, along with the parameters.

The client should populate the bean below and send a list, so that the server can identify the parameters (and their order) used to generate the signature:


public class RestParameter implements Comparable<RestParameter>, Serializable {

    private static final long serialVersionUID = -8654122030780643503L;
    private String paramName;
    private String paramValue;
    private String order;

    // getters, setters, compareTo, etc. omitted
}

Server Side:

  • Server side signature generation involves fetching the ACCESS KEY from the request header.
  • Using the access key, which is unique, the server fetches the SECRET KEY from the database.
  • Using the list of RestParameter objects and the order of the parameters, the server rebuilds the signature plain text.
  • Using the secret key from step 2 and the HMAC algorithm written earlier, it generates a message hash.

Since the same plaintext and secret key are used to generate the hash, the message hash from the header that the client sent and the server-generated hash must be equal for the request to proceed.



[A full working code base is available here: https://github.com/badalb/spring-rest-security-tokenbased]