Tuesday, March 17, 2015

Lucene on a clustered environment

I have a task that refreshes Lucene indexes periodically. However, when the application is deployed in a clustered environment, each node tries to start the indexing process, and the others fail with the following error:

 Entity com.me.MyEntity Id 1783 Work Type org.hibernate.search.backend.UpdateLuceneWork  
 : org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/path_to_index/com.me.MyEntity/write.lock  
 at org.apache.lucene.store.Lock.obtain(Lock.java:84) [lucene-core-3.5.0.jar:3.5.0 1204425 - simon - 2011-11-21 11:20:12]  
 at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1108) [lucene-core-3.5.0.jar:3.5.0 1204425 - simon - 2011-11-21 11:20:12]  
 at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:127) [hibernate-search-engine-4.1.0.Final.jar:4.1.0.Final]  
 at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:102) [hibernate-search-engine-4.1.0.Final.jar:4.1.0.Final]  
 at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:119) [hibernate-search-engine-4.1.0.Final.jar:4.1.0.Final]  
 at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:99) [hibernate-search-engine-4.1.0.Final.jar:4.1.0.Final]  
 at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67) [hibernate-search-engine-4.1.0.Final.jar:4.1.0.Final]  
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_03]  
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [rt.jar:1.7.0_03]  
 at java.util.concurrent.FutureTask.run(FutureTask.java:166) [rt.jar:1.7.0_03]  
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.7.0_03]  
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.7.0_03]  
 at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_03]  
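For context, the scheduled job that triggers this looks roughly like the sketch below. This is a minimal reconstruction, not my exact code: the ReindexJob class name is made up, and the MassIndexer call is just the standard Hibernate Search 4.x way of rebuilding an index from the database.

 import javax.ejb.Schedule;
 import javax.ejb.Singleton;
 import javax.persistence.EntityManager;
 import javax.persistence.PersistenceContext;
 import org.hibernate.search.jpa.FullTextEntityManager;
 import org.hibernate.search.jpa.Search;
 import com.me.MyEntity;

 @Singleton
 public class ReindexJob {

      @PersistenceContext
      private EntityManager em;

      // The timer fires on EVERY node of the cluster -- hence the lock contention
      @Schedule(hour = "2")
      public void refreshIndexes() throws InterruptedException {
           FullTextEntityManager ftem = Search.getFullTextEntityManager(em);
           ftem.createIndexer(MyEntity.class).startAndWait();
      }
 }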

These links may help:
http://stackoverflow.com/questions/1228833/sharing-a-java-synchronized-block-across-a-cluster-or-using-a-global-lock 
http://stackoverflow.com/questions/17921898/using-lucene-in-a-clustered-environment-with-shared-nfs
http://forums.terracotta.org/forums/posts/list/6707.page
http://stackoverflow.com/questions/13486426/how-to-unlock-the-index-directory-in-lucene
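Those threads mostly boil down to the same idea: elect a single worker via a cluster-wide lock. A database row lock is one common variant, since every node already shares the same database. Here is a rough sketch of that approach (the job_lock table and the rebuildIndexes() method are made up for illustration):

 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
 import javax.sql.DataSource;

 public void runExclusively(DataSource dataSource) throws SQLException {
      // All nodes share the DB, so SELECT ... FOR UPDATE acts as a cluster-wide lock.
      // Assumes a job_lock table pre-populated with one row per job name.
      try (Connection con = dataSource.getConnection()) {
           con.setAutoCommit(false);
           try (PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM job_lock WHERE name = ? FOR UPDATE")) {
                ps.setString(1, "lucene-reindex");
                ps.executeQuery();  // only one node gets past this line at a time
                rebuildIndexes();   // the actual indexing work goes here
           } finally {
                con.commit();       // committing releases the row lock for the others
           }
      }
 }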

But I solved my case by adding a random sleep before the job starts:
 @Schedule(...)
 public void myTask() {
      try {
           // Sleep a random amount so the nodes (hopefully) wake up at different times
           long sleepFor = (long) (Math.random() * 100000);
           Thread.sleep(sleepFor);
           log.infov("Slept for {0} ms on this node", sleepFor);
           Directory directory = FSDirectory.open(new File("path_to_index/com.me.MyEntity/"));
           try {
                if (directory.fileExists("write.lock")) {
                     log.infov("ATTENTION!!! Already done by the other node!!");
                     return;
                }
           } finally {
                directory.close(); // close it whether or not the lock was found
           }
 ...

The random sleep breaks the synchronization between the nodes, and the one that sleeps the least does the job! The others find out by checking for the lock file.
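By the way, Lucene 3.x can do the lock check for you, so the write.lock file name does not have to be hardcoded. A variant of the check above using IndexWriter.isLocked (same idea, just via the built-in API; the helper method name is mine):

 import java.io.File;
 import java.io.IOException;
 import org.apache.lucene.index.IndexWriter;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.store.FSDirectory;

 // Returns true when another node already holds the index write lock
 private boolean lockedByAnotherNode() throws IOException {
      Directory directory = FSDirectory.open(new File("path_to_index/com.me.MyEntity/"));
      try {
           return IndexWriter.isLocked(directory);
      } finally {
           directory.close(); // close the directory even when the lock is held
      }
 }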

Hope this helps..
