In this blog post we will build an in-memory cache with both time-based and size-based eviction. Time-based (time-to-live) eviction removes cached objects whose idle time since their last access exceeds a predefined limit, whereas size-based eviction removes objects using the LRU (least recently used) algorithm once the cache is full.
The key wrapper for the cache:
public class CacheKey<T> {
    private T key;

    public CacheKey(T key) {
        this.key = key;
    }

    public T getKey() {
        return key;
    }

    public void setKey(T key) {
        this.key = key;
    }

    // equals/hashCode are required: without them two CacheKey instances
    // wrapping the same value would map to different cache entries
    @Override
    public boolean equals(Object o) {
        return o instanceof CacheKey && Objects.equals(key, ((CacheKey<?>) o).key);
    }

    @Override
    public int hashCode() {
        return Objects.hash(key);
    }
}
The value wrapper for the cache, which records when the entry was last accessed:
public class CachedObject<T> {
    private long lastAccessedTime;
    private T value;

    public CachedObject(T value) {
        this.lastAccessedTime = System.currentTimeMillis();
        this.value = value;
    }

    public long getLastAccessedTime() {
        return lastAccessedTime;
    }

    public void setLastAccessedTime(long lastAccessedTime) {
        this.lastAccessedTime = lastAccessedTime;
    }

    public T getValue() {
        return value;
    }

    public void setValue(T value) {
        this.value = value;
    }
}
LRU-based cache storage:
public class LRUCache<K, V> {
    private final int maxSize;
    private ConcurrentHashMap<K, V> map;
    private ConcurrentLinkedQueue<K> queue;

    public LRUCache(final int maxSize) {
        this.maxSize = maxSize;
        map = new ConcurrentHashMap<K, V>(maxSize);
        queue = new ConcurrentLinkedQueue<K>();
    }

    // note: the containsKey/remove sequences below are not atomic; under heavy
    // concurrent access the map and queue can briefly disagree
    public void put(final K key, final V value) {
        if (map.containsKey(key)) {
            // the key is re-added at the tail below, so drop its old position
            queue.remove(key);
        }
        while (queue.size() >= maxSize) {
            // the head of the queue is the least recently used key
            K oldestKey = queue.poll();
            System.out.println("Key to remove : " + oldestKey);
            if (null != oldestKey) {
                map.remove(oldestKey);
            }
        }
        queue.add(key);
        map.put(key, value);
    }

    public V get(final K key) {
        if (map.containsKey(key)) {
            // move the key to the tail of the queue to mark it most recently used
            queue.remove(key);
            queue.add(key);
        }
        return map.get(key);
    }

    public void remove(final K key) {
        if (map.containsKey(key)) {
            // remove the key from both the queue and the map
            queue.remove(key);
            map.remove(key);
        }
    }

    public ConcurrentHashMap<K, V> getMap() {
        return map;
    }

    public void setMap(ConcurrentHashMap<K, V> map) {
        this.map = map;
    }

    public ConcurrentLinkedQueue<K> getQueue() {
        return queue;
    }

    public void setQueue(ConcurrentLinkedQueue<K> queue) {
        this.queue = queue;
    }

    public int getMaxSize() {
        return maxSize;
    }
}
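As an aside, if you do not need the concurrent map and queue, the JDK's LinkedHashMap can give you the same LRU behavior with far less code: constructed with accessOrder = true, it keeps entries ordered from least to most recently used, and overriding removeEldestEntry lets it evict automatically. The sketch below is a minimal, non-thread-safe illustration of that approach (the class name SimpleLRUCache is just for this example):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch on top of LinkedHashMap in access order.
// Not thread-safe on its own; wrap with Collections.synchronizedMap if needed.
public class SimpleLRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public SimpleLRUCache(int maxSize) {
        // accessOrder = true: iteration order is least to most recently used
        super(16, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the least recently used entry once capacity is exceeded
        return size() > maxSize;
    }

    public static void main(String[] args) {
        SimpleLRUCache<String, Integer> cache = new SimpleLRUCache<String, Integer>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");     // touch "a" so "b" becomes least recently used
        cache.put("c", 3);  // evicts "b"
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

The hand-rolled ConcurrentLinkedQueue version above trades this simplicity for lock-free reads; which one fits depends on how much contention you expect.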
In-memory cache implementation:
public class InMemoryCache<K, V> {
    private final long timeToLive;
    private final LRUCache<CacheKey<K>, CachedObject<V>> cache;

    public InMemoryCache(final long lifeTime, final long timerInterval, final int maxItems) {
        this.timeToLive = lifeTime * 1000;
        cache = new LRUCache<CacheKey<K>, CachedObject<V>>(maxItems);
        if (timeToLive > 0 && timerInterval > 0) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        try {
                            // sleep for the cleanup interval, not the time to live
                            Thread.sleep(timerInterval * 1000);
                        } catch (InterruptedException ex) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                        cleanup();
                    }
                }

                private void cleanup() {
                    long now = System.currentTimeMillis();
                    List<CacheKey<K>> deleteKey = new ArrayList<CacheKey<K>>();
                    for (Map.Entry<CacheKey<K>, CachedObject<V>> entry : cache.getMap().entrySet()) {
                        CachedObject<V> cObject = entry.getValue();
                        if (cObject != null && now > (timeToLive + cObject.getLastAccessedTime())) {
                            deleteKey.add(entry.getKey());
                        }
                    }
                    for (CacheKey<K> cacheKey : deleteKey) {
                        cache.remove(cacheKey);
                        Thread.yield();
                    }
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }

    public void put(CacheKey<K> key, CachedObject<V> value) {
        cache.put(key, value);
    }

    public CachedObject<V> get(CacheKey<K> key) {
        CachedObject<V> c = cache.get(key);
        if (c == null) {
            return null;
        }
        c.setLastAccessedTime(System.currentTimeMillis());
        return c;
    }

    public void remove(CacheKey<K> key) {
        cache.remove(key);
    }

    public int size() {
        return cache.getMap().size();
    }
}
To summarize: once the LRUCache storage reaches its predefined capacity, it evicts the least recently used entry. The access order is maintained in a ConcurrentLinkedQueue, with the head of the queue always holding the least recently used key.
InMemoryCache runs a background daemon thread that periodically scans for objects whose idle time has exceeded the time to live and removes them from the cache.
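If you would rather not manage the background thread by hand, the same TTL cleanup can be driven by a ScheduledExecutorService. The standalone sketch below shows that variant; the class and field names (TtlMap, sweep, and so on) are illustrative, not part of the cache above:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: TTL eviction driven by a ScheduledExecutorService instead of
// a hand-rolled while(true)/sleep loop.
public class TtlMap<K, V> {
    private static class Timestamped<V> {
        final V value;
        volatile long lastAccess = System.currentTimeMillis();
        Timestamped(V value) { this.value = value; }
    }

    private final ConcurrentHashMap<K, Timestamped<V>> map =
            new ConcurrentHashMap<K, Timestamped<V>>();
    private final long ttlMillis;

    public TtlMap(long ttlMillis, long sweepIntervalMillis) {
        this.ttlMillis = ttlMillis;
        ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true); // do not keep the JVM alive for the sweeper
            return t;
        });
        sweeper.scheduleAtFixedRate(this::sweep,
                sweepIntervalMillis, sweepIntervalMillis, TimeUnit.MILLISECONDS);
    }

    // removeIf on the entry set view is safe on ConcurrentHashMap
    void sweep() {
        long now = System.currentTimeMillis();
        map.entrySet().removeIf(e -> now - e.getValue().lastAccess > ttlMillis);
    }

    public void put(K key, V value) {
        map.put(key, new Timestamped<V>(value));
    }

    public V get(K key) {
        Timestamped<V> t = map.get(key);
        if (t == null) return null;
        t.lastAccess = System.currentTimeMillis(); // refresh idle time on read
        return t.value;
    }

    public int size() {
        return map.size();
    }
}
```

The executor version also shuts down cleanly if you keep a reference to the service and call shutdown(), which the raw thread approach handles less gracefully.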