CLOVER🍀


Playing with Lucene's FuzzyQuery and MoreLikeThis

While reading some Solr- and Elasticsearch-related books recently, a couple of queries caught my attention, so I played with them directly in Lucene.

The queries in question are:

  • FuzzyQuery
  • MoreLikeThisQuery

FuzzyQuery does fuzzy matching, and MoreLikeThis is a query for retrieving documents similar to a given one.

Well, let's try using them.

Setup

First, the dependency definitions.
build.sbt

name := "lucene-fuzzy-more-like-this"

version := "0.0.1-SNAPSHOT"

scalaVersion := "2.11.0"

organization := "org.littlewings"

scalacOptions ++= Seq("-Xlint", "-deprecation", "-unchecked", "-feature")

incOptions := incOptions.value.withNameHashing(true)

val luceneVersion = "4.8.0"

libraryDependencies ++= Seq(
  "org.apache.lucene" % "lucene-analyzers-kuromoji" % luceneVersion,
  "org.apache.lucene" % "lucene-queries" % luceneVersion
)

The Lucene version used is 4.8.0.

The skeleton of the source code looks like this.
src/main/scala/org/littlewings/lucene/fuzzymorelikethis/LuceneFuzzyMoreLikeThis.scala

package org.littlewings.lucene.fuzzymorelikethis

import scala.collection.JavaConverters._

import java.io.StringReader

import org.apache.lucene.analysis.Analyzer
import org.apache.lucene.analysis.ja.JapaneseAnalyzer
import org.apache.lucene.document.{Document, Field, TextField}
import org.apache.lucene.index.{DirectoryReader, IndexReader, IndexWriter, IndexWriterConfig, Term}
import org.apache.lucene.queries.mlt.{MoreLikeThis, MoreLikeThisQuery}
import org.apache.lucene.search.{BooleanQuery, FuzzyQuery, IndexSearcher, MatchAllDocsQuery, Query, Sort, TermQuery}
import org.apache.lucene.search.{ScoreDoc, TopDocs, TopFieldCollector, TotalHitCountCollector}
import org.apache.lucene.store.{Directory, RAMDirectory}
import org.apache.lucene.util.Version

object LuceneFuzzyMoreLikeThis {
  def main(args: Array[String]): Unit = {
    val luceneVersion = Version.LUCENE_CURRENT
    def analyzer = createAnalyzer(luceneVersion)

    for (directory <- new RAMDirectory) {
      // Register the documents
      registerDocuments(directory, luceneVersion, analyzer)

      // Run the various searches
    }
  }

  implicit class AutoCloseableWrapper[A <: AutoCloseable](val underlying: A) extends AnyVal {
    def foreach(fun: A => Unit): Unit =
      try {
        fun(underlying)
      } finally {
        underlying.close()
      }
  }
}
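This AutoCloseableWrapper is a small loan pattern: the for-comprehension desugars to foreach, so close() is guaranteed to run. A self-contained sketch with a dummy resource (DummyResource and runDemo are made-up names, for illustration only):

```scala
object LoanPatternDemo {
  // Same wrapper as above: makes any AutoCloseable usable in a for-comprehension
  implicit class AutoCloseableWrapper[A <: AutoCloseable](val underlying: A) extends AnyVal {
    def foreach(fun: A => Unit): Unit =
      try fun(underlying) finally underlying.close()
  }

  // Dummy resource that just records whether close() was called
  class DummyResource extends AutoCloseable {
    var closed = false
    override def close(): Unit = closed = true
  }

  // Returns true if the resource was closed after the loop body ran
  def runDemo(): Boolean = {
    val resource = new DummyResource
    for (r <- resource) {
      // use the resource here; close() runs even if this body throws
    }
    resource.closed
  }

  def main(args: Array[String]): Unit =
    println(runDemo()) // true
}
```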

The Analyzer will be Kuromoji (JapaneseAnalyzer).

  private def createAnalyzer(version: Version): Analyzer =
    new JapaneseAnalyzer(version)

The documents to register look like this.

  private def registerDocuments(directory: Directory, version: Version, analyzer: Analyzer): Unit =
    for {
      writer <- new IndexWriter(directory,
                                new IndexWriterConfig(version, analyzer))
      text <- Array("すもももももももものうち。",
                    "メガネは顔の一部です。",
                    "日本経済新聞でモバゲーの記事を読んだ。",
                    "Java, Scala, Groovy, Clojure",
                    "LUCENE、SOLR、Lucene, Solr",
                    "アイウエオカキクケコさしすせそABCXYZ123456",
                    "Lucene is a full-featured text search engine library written in Java.")
    } {
      val document = new Document
      document.add(new TextField("text", text, Field.Store.YES))
      writer.addDocument(document)
    }

Note that this program stores the index in a throwaway RAMDirectory, so each Document's id will be 0 to 6. These ids are used later with MoreLikeThisQuery.

Search and Explain

Simply issuing a Query to search would be fine, but I figured it would be better to run Explain at the same time. So I prepared the following method, which takes a Query, prints the search results, and also runs Explain.

  private def searchAndExplain(reader: IndexReader,
                               query: Query): Unit = {
    val searcher = new IndexSearcher(reader)

    println(s"Input Query => [$query]")

    val totalHitCountCollector = new TotalHitCountCollector
    searcher.search(query, totalHitCountCollector)
    val totalHits = totalHitCountCollector.getTotalHits

    val docCollector =
      TopFieldCollector.create(Sort.RELEVANCE,
                               1000,
                               true,
                               true,
                               true,
                               true)

    searcher.search(query, docCollector)
    val topDocs = docCollector.topDocs
    val hits = topDocs.scoreDocs

    hits.foreach { h =>
      println("---------------")
      val hitDoc = searcher.doc(h.doc)
      println(s"   ScoreDoc, id[${h.score}:${h.doc}]: Doc => " +
              hitDoc
                .getFields
                .asScala
                .map(_.stringValue)
                .mkString("|"))

      val explanation = searcher.explain(query, h.doc)

      println()
      println("Explanation As String => ")
      explanation.toString.lines.map("    " + _).foreach(println)
      println("---------------")
    }
  }

Using this, we will look at the search results as well as the Explain output.

TermQuery

First, to keep things simple, let's check with TermQuery.

  private def termQueries(directory: Directory,
                           analyzer: Analyzer,
                           terms: Term*): Unit = {
    println("==================== TermQuery Start ====================")

    for {
      reader <- DirectoryReader.open(directory)
      term <- terms
    } {
      val query = new TermQuery(term)
      searchAndExplain(reader, query)
    }

    println("==================== TermQuery End ====================")
  }

The caller side looks like this.

      termQueries(directory,
                  analyzer,
                  new Term("text", "java"),
                  new Term("text", "jabo"),
                  new Term("text", "日本"),
                  new Term("text", "日韓"),
                  new Term("text", "メガホン"))

Some of these terms deliberately differ slightly from the Terms contained in the Documents.

So, the results come out like this.
*For TermQuery, the Explain output is omitted

Input Query => [text:java]
---------------
   ScoreDoc, id[0.92364895:3]: Doc => Java, Scala, Groovy, Clojure
---------------
   ScoreDoc, id[0.46182448:6]: Doc => Lucene is a full-featured text search engine library written in Java.
---------------
Input Query => [text:jabo]
Input Query => [text:日本]
---------------
   ScoreDoc, id[0.84478617:2]: Doc => 日本経済新聞でモバゲーの記事を読んだ。
---------------
Input Query => [text:日韓]
Input Query => [text:メガホン]

Naturally, the misspelled terms don't hit any Documents.

FuzzyQuery

Now, let's try FuzzyQuery. FuzzyQuery is a Query that pulls back plausible-looking results even when the input term is slightly wrong.

  private def fuzzyQueries(directory: Directory,
                           analyzer: Analyzer,
                           terms: (Term, Int)*): Unit = {
    println("==================== FuzzyQuery Start ====================")

    for {
      reader <- DirectoryReader.open(directory)
      (term, maxEdit) <- terms
    } {
      val query = new FuzzyQuery(term, maxEdit)

      println("Rewrited Query And Term => "
              + query
              .rewrite(reader)
              .asInstanceOf[BooleanQuery]
              .getClauses
              .flatMap { bq =>
                Array(bq, bq.getQuery.asInstanceOf[TermQuery].getTerm.text)
              }
              .mkString(", "))

      searchAndExplain(reader, query)
    }

    println("==================== FuzzyQuery End ====================")
  }

The Int passed along with each Term is maxEdits, the edit distance set on the FuzzyQuery. The default is 2, and it can be tuned in the range 0 to 2. In other words, the default is already the maximum.
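What maxEdits counts is the edit distance between the input and the indexed term (Lucene builds an automaton internally; the naive Levenshtein DP below is just for illustration):

```scala
object EditDistance {
  // Classic dynamic-programming Levenshtein distance:
  // minimum number of single-character insertions, deletions, substitutions
  def levenshtein(a: String, b: String): Int = {
    val dp = Array.tabulate(a.length + 1, b.length + 1) { (i, j) =>
      if (i == 0) j else if (j == 0) i else 0
    }
    for (i <- 1 to a.length; j <- 1 to b.length) {
      val cost = if (a(i - 1) == b(j - 1)) 0 else 1
      dp(i)(j) = math.min(math.min(dp(i - 1)(j) + 1, dp(i)(j - 1) + 1),
                          dp(i - 1)(j - 1) + cost)
    }
    dp(a.length)(b.length)
  }

  def main(args: Array[String]): Unit = {
    println(levenshtein("jabo", "java")) // 2 (b->v, o->a)
    println(levenshtein("日韓", "日本")) // 1
  }
}
```

"jabo" is two substitutions away from "java", which explains the results further down: the maxEdits=1 run finds nothing, while the maxEdits=2 run hits.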

Also, FuzzyQuery apparently rewrites the Query internally, so the rewritten result is printed as well.

Apparently you can use this to build something like "Did you mean?"…

Now, the caller side.

      fuzzyQueries(directory,
                   analyzer,
                   (new Term("text", "java"), 2),
                   (new Term("text", "jabo"), 1),
                   (new Term("text", "jabo"), 2),
                   (new Term("text", "日本"), 2),
                   (new Term("text", "日韓"), 2),
                   (new Term("text", "メガホン"), 2))

Here are the execution results.

Rewrited Query And Term => text:java, java
Input Query => [text:java~2]
---------------
   ScoreDoc, id[0.92364895:3]: Doc => Java, Scala, Groovy, Clojure

Explanation As String => 
    0.92364895 = (MATCH) weight(text:java in 3) [DefaultSimilarity], result of:
      0.92364895 = fieldWeight in 3, product of:
        1.0 = tf(freq=1.0), with freq of:
          1.0 = termFreq=1.0
        1.8472979 = idf(docFreq=2, maxDocs=7)
        0.5 = fieldNorm(doc=3)
---------------
---------------
   ScoreDoc, id[0.46182448:6]: Doc => Lucene is a full-featured text search engine library written in Java.

Explanation As String => 
    0.46182448 = (MATCH) weight(text:java in 6) [DefaultSimilarity], result of:
      0.46182448 = fieldWeight in 6, product of:
        1.0 = tf(freq=1.0), with freq of:
          1.0 = termFreq=1.0
        1.8472979 = idf(docFreq=2, maxDocs=7)
        0.25 = fieldNorm(doc=6)
---------------

Because of how the program is structured, the rewritten Query is printed first. This condition passes a term that exists as-is, so there is nothing particularly interesting here.
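As an aside, the idf and fieldNorm values in the Explain output can be reproduced by hand. DefaultSimilarity computes idf as 1 + ln(numDocs / (docFreq + 1)), and fieldNorm as 1 / sqrt(number of terms) (byte-encoded, so it is lossy for longer documents). A quick check against the numbers above:

```scala
object ExplainCheck {
  // DefaultSimilarity's idf: 1 + ln(numDocs / (docFreq + 1))
  def idf(docFreq: Long, numDocs: Long): Float =
    (1.0 + math.log(numDocs.toDouble / (docFreq + 1))).toFloat

  def main(args: Array[String]): Unit = {
    // "java" occurs in 2 of the 7 documents
    println(idf(2, 7)) // 1.8472979
    // "日本" occurs in 1 of the 7 documents
    println(idf(1, 7)) // 2.252763
    // fieldNorm for doc 3 ("Java, Scala, Groovy, Clojure" = 4 terms)
    println((1.0 / math.sqrt(4)).toFloat) // 0.5
  }
}
```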

Next, the term "jabo" with maxEdits set to 1.

Rewrited Query And Term => 
Input Query => [text:jabo~1]

No hits…

So, let's set maxEdits to 2.

Rewrited Query And Term => text:java^0.5, java
Input Query => [text:jabo~2]
---------------
   ScoreDoc, id[0.92364895:3]: Doc => Java, Scala, Groovy, Clojure

Explanation As String => 
    0.92364895 = (MATCH) weight(text:java^0.5 in 3) [DefaultSimilarity], result of:
      0.92364895 = fieldWeight in 3, product of:
        1.0 = tf(freq=1.0), with freq of:
          1.0 = termFreq=1.0
        1.8472979 = idf(docFreq=2, maxDocs=7)
        0.5 = fieldNorm(doc=3)
---------------
---------------
   ScoreDoc, id[0.46182448:6]: Doc => Lucene is a full-featured text search engine library written in Java.

Explanation As String => 
    0.46182448 = (MATCH) weight(text:java^0.5 in 6) [DefaultSimilarity], result of:
      0.46182448 = fieldWeight in 6, product of:
        1.0 = tf(freq=1.0), with freq of:
          1.0 = termFreq=1.0
        1.8472979 = idf(docFreq=2, maxDocs=7)
        0.25 = fieldNorm(doc=6)
---------------

This time, it hit.

Looking closely at the Explain output, there is a change here:

    0.92364895 = (MATCH) weight(text:java^0.5 in 3) [DefaultSimilarity], result of:

The rewritten Query looks like this:

Rewrited Query And Term => text:java^0.5, java

Specifically, the Query itself is

text:java^0.5

and the trailing

java

is the Term.

This is the part that retrieved it:

      println("Rewrited Query And Term => "
              + query
              .rewrite(reader)
              .asInstanceOf[BooleanQuery]
              .getClauses
              .flatMap { bq =>
                Array(bq, bq.getQuery.asInstanceOf[TermQuery].getTerm.text)
              }
              .mkString(", "))

It's the part calling TermQuery#getTerm followed by text; this gives you the term FuzzyQuery is actually going to search for. You could use this for something like "Did you mean?".
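Building on that, here is a standalone "Did you mean?" sketch (an illustration, not Lucene's actual implementation): pick the closest dictionary term by edit distance, scored with 1 - distance / min(length), which matches the boosts seen in the rewritten queries (java^0.5, メガネ^0.3333333):

```scala
object DidYouMean {
  // Naive Levenshtein distance (FuzzyQuery uses an automaton internally)
  def levenshtein(a: String, b: String): Int = {
    val dp = Array.tabulate(a.length + 1, b.length + 1) { (i, j) =>
      if (i == 0) j else if (j == 0) i else 0
    }
    for (i <- 1 to a.length; j <- 1 to b.length) {
      val cost = if (a(i - 1) == b(j - 1)) 0 else 1
      dp(i)(j) = math.min(math.min(dp(i - 1)(j) + 1, dp(i)(j - 1) + 1),
                          dp(i - 1)(j - 1) + cost)
    }
    dp(a.length)(b.length)
  }

  // Suggest the closest vocabulary term within maxEdits, with a
  // similarity score of 1 - distance / min(input length, term length)
  def suggest(input: String,
              vocabulary: Seq[String],
              maxEdits: Int = 2): Option[(String, Float)] =
    vocabulary
      .map(term => (term, levenshtein(input, term)))
      .filter { case (_, d) => d <= maxEdits }
      .sortBy { case (_, d) => d }
      .headOption
      .map { case (term, d) =>
        (term, 1f - d.toFloat / math.min(input.length, term.length))
      }

  def main(args: Array[String]): Unit = {
    val vocab = Seq("java", "scala", "lucene", "日本", "メガネ")
    println(suggest("jabo", vocab))     // Some((java,0.5))
    println(suggest("メガホン", vocab)) // suggests メガネ with score about 1/3
    println(suggest("zzzzz", vocab))    // None
  }
}
```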

Next, "日本" doesn't produce a particularly interesting result, but…

Rewrited Query And Term => text:日本, 日本
Input Query => [text:日本~2]
---------------
   ScoreDoc, id[0.84478617:2]: Doc => 日本経済新聞でモバゲーの記事を読んだ。

Explanation As String => 
    0.84478617 = (MATCH) weight(text:日本 in 2) [DefaultSimilarity], result of:
      0.84478617 = fieldWeight in 2, product of:
        1.0 = tf(freq=1.0), with freq of:
          1.0 = termFreq=1.0
        2.252763 = idf(docFreq=1, maxDocs=7)
        0.375 = fieldNorm(doc=2)
---------------

"日韓" and "メガホン" hit "日本" and "メガネ".

Rewrited Query And Term => text:日本^0.5, 日本
Input Query => [text:日韓~2]
---------------
   ScoreDoc, id[0.84478617:2]: Doc => 日本経済新聞でモバゲーの記事を読んだ。

Explanation As String => 
    0.84478617 = (MATCH) weight(text:日本^0.5 in 2) [DefaultSimilarity], result of:
      0.84478617 = fieldWeight in 2, product of:
        1.0 = tf(freq=1.0), with freq of:
          1.0 = termFreq=1.0
        2.252763 = idf(docFreq=1, maxDocs=7)
        0.375 = fieldNorm(doc=2)
---------------
Rewrited Query And Term => text:メガネ^0.3333333, メガネ
Input Query => [text:メガホン~2]
---------------
   ScoreDoc, id[1.1263815:1]: Doc => メガネは顔の一部です。

Explanation As String => 
    1.1263815 = (MATCH) weight(text:メガネ^0.3333333 in 1) [DefaultSimilarity], result of:
      1.1263815 = fieldWeight in 1, product of:
        1.0 = tf(freq=1.0), with freq of:
          1.0 = termFreq=1.0
        2.252763 = idf(docFreq=1, maxDocs=7)
        0.5 = fieldNorm(doc=1)
---------------

I see.

MoreLikeThis

Last is MoreLikeThis. Instead of receiving Terms to search for like before, this one receives Document IDs.

  private def moreLikeThisQueries(directory: Directory,
                                  analyzer: Analyzer,
                                  docIds: Int*): Unit = {
    println("==================== MoreLikeThisQuery Start ====================")

    for {
      reader <- DirectoryReader.open(directory)
      docId <- docIds
    } {
      val mlt = new MoreLikeThis(reader)
      mlt.setAnalyzer(analyzer)
      mlt.setFieldNames(Array("text"))
      mlt.setMinTermFreq(0)
      mlt.setMinDocFreq(0)

      // Pretend a search has already been run, and receive the Document's ID
      val query = mlt.like(docId)

      searchAndExplain(reader, query)
    }

    println("==================== MoreLikeThisQuery End ====================")
  }

Here we use a class called MoreLikeThis. First create an instance by passing the IndexReader to the constructor, then set the Analyzer and the target Fields.

MoreLikeThis then builds the Query used for the actual search; the Analyzer is for tokenizing the Fields fetched from the specified Document, and the field names are given as an array.

      val mlt = new MoreLikeThis(reader)
      mlt.setAnalyzer(analyzer)
      mlt.setFieldNames(Array("text"))

Incidentally, this time I'm also setting the minimum term frequency for candidate terms (MinTermFreq) and the minimum document frequency threshold (MinDocFreq).

      mlt.setMinTermFreq(0)
      mlt.setMinDocFreq(0)

The defaults are 2 for MinTermFreq and 5 for MinDocFreq, but if you don't change them, nothing gets pulled back with this dataset…
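To see why, here is a simplified sketch of that filtering (not Lucene's actual code; the term statistics below are taken from this corpus): every term in Document 3 has a term frequency of 1 and appears in at most 2 of the 7 documents, so the defaults (2 and 5) reject everything.

```scala
object MltTermFilter {
  // Per-term statistics: frequency within the source document,
  // and number of documents in the index containing the term
  case class TermStat(term: String, termFreq: Int, docFreq: Int)

  // Keep only terms meeting both thresholds (simplified version of
  // MoreLikeThis's minTermFreq / minDocFreq filtering)
  def selectTerms(stats: Seq[TermStat], minTermFreq: Int, minDocFreq: Int): Seq[String] =
    stats.collect {
      case TermStat(t, tf, df) if tf >= minTermFreq && df >= minDocFreq => t
    }

  def main(args: Array[String]): Unit = {
    // Stats for Document 3 ("Java, Scala, Groovy, Clojure") in the 7-document corpus
    val doc3 = Seq(
      TermStat("java", 1, 2),    // "java" also occurs in Document 6
      TermStat("scala", 1, 1),
      TermStat("groovy", 1, 1),
      TermStat("clojure", 1, 1))

    println(selectTerms(doc3, minTermFreq = 2, minDocFreq = 5)) // List() - the defaults select nothing
    println(selectTerms(doc3, minTermFreq = 0, minDocFreq = 0)) // all four terms survive
  }
}
```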

After that, generate the Query by passing a Document ID.

      // Pretend a search has already been run, and receive the Document's ID
      val query = mlt.like(docId)

In other words, this is a feature for pulling back Documents related to a Document obtained from a previous search.

Incidentally, MoreLikeThis#like has an overload that takes a java.io.Reader and a field name, but the Query generated from that seems to be built only from the term information contained in the Reader…

Well, this time I'll stick with specifying IDs.

The caller code.

      moreLikeThisQueries(directory,
                          analyzer,
                          3,
                          6)

Since the Document IDs specified are 3 and 6, the target Documents are:

                    "Java, Scala, Groovy, Clojure",

                    "Lucene is a full-featured text search engine library written in Java.")

Now, the execution results. These too include the source Document itself (Document ID 6)…

Input Query => [text:groovy text:clojure text:scala text:java]
---------------
   ScoreDoc, id[2.158549:3]: Doc => Java, Scala, Groovy, Clojure

Explanation As String => 
    2.158549 = (MATCH) sum of:
      0.58777237 = (MATCH) weight(text:groovy in 3) [DefaultSimilarity], result of:
        0.58777237 = score(doc=3,freq=1.0 = termFreq=1.0
    ), product of:
          0.5218235 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.2316371 = queryNorm
          1.1263815 = fieldWeight in 3, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.5 = fieldNorm(doc=3)
      0.58777237 = (MATCH) weight(text:clojure in 3) [DefaultSimilarity], result of:
        0.58777237 = score(doc=3,freq=1.0 = termFreq=1.0
    ), product of:
          0.5218235 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.2316371 = queryNorm
          1.1263815 = fieldWeight in 3, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.5 = fieldNorm(doc=3)
      0.58777237 = (MATCH) weight(text:scala in 3) [DefaultSimilarity], result of:
        0.58777237 = score(doc=3,freq=1.0 = termFreq=1.0
    ), product of:
          0.5218235 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.2316371 = queryNorm
          1.1263815 = fieldWeight in 3, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.5 = fieldNorm(doc=3)
      0.3952319 = (MATCH) weight(text:java in 3) [DefaultSimilarity], result of:
        0.3952319 = score(doc=3,freq=1.0 = termFreq=1.0
    ), product of:
          0.42790273 = queryWeight, product of:
            1.8472979 = idf(docFreq=2, maxDocs=7)
            0.2316371 = queryNorm
          0.92364895 = fieldWeight in 3, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            1.8472979 = idf(docFreq=2, maxDocs=7)
            0.5 = fieldNorm(doc=3)
---------------
---------------
   ScoreDoc, id[0.049403988:6]: Doc => Lucene is a full-featured text search engine library written in Java.

Explanation As String => 
    0.049403988 = (MATCH) product of:
      0.19761595 = (MATCH) sum of:
        0.19761595 = (MATCH) weight(text:java in 6) [DefaultSimilarity], result of:
          0.19761595 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
            0.42790273 = queryWeight, product of:
              1.8472979 = idf(docFreq=2, maxDocs=7)
              0.2316371 = queryNorm
            0.46182448 = fieldWeight in 6, product of:
              1.0 = tf(freq=1.0), with freq of:
                1.0 = termFreq=1.0
              1.8472979 = idf(docFreq=2, maxDocs=7)
              0.25 = fieldNorm(doc=6)
      0.25 = coord(1/4)
---------------

The terms contained in Document 3 have become the Query as-is.

Input Query => [text:groovy text:clojure text:scala text:java]

So the source Document itself (the Document with ID 3) is included in the hits…

For the record, here is the other result as well. The source Document is included here too…

Input Query => [text:a text:full text:written text:featured text:library text:is text:in text:engine text:text text:search text:java text:lucene]
---------------
   ScoreDoc, id[1.8969455:6]: Doc => Lucene is a full-featured text search engine library written in Java.

Explanation As String => 
    1.8969457 = (MATCH) sum of:
      0.16720767 = (MATCH) weight(text:a in 6) [DefaultSimilarity], result of:
        0.16720767 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.2968935 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.13179083 = queryNorm
          0.56319076 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.16720767 = (MATCH) weight(text:full in 6) [DefaultSimilarity], result of:
        0.16720767 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.2968935 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.13179083 = queryNorm
          0.56319076 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.16720767 = (MATCH) weight(text:written in 6) [DefaultSimilarity], result of:
        0.16720767 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.2968935 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.13179083 = queryNorm
          0.56319076 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.16720767 = (MATCH) weight(text:featured in 6) [DefaultSimilarity], result of:
        0.16720767 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.2968935 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.13179083 = queryNorm
          0.56319076 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.16720767 = (MATCH) weight(text:library in 6) [DefaultSimilarity], result of:
        0.16720767 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.2968935 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.13179083 = queryNorm
          0.56319076 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.16720767 = (MATCH) weight(text:is in 6) [DefaultSimilarity], result of:
        0.16720767 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.2968935 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.13179083 = queryNorm
          0.56319076 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.16720767 = (MATCH) weight(text:in in 6) [DefaultSimilarity], result of:
        0.16720767 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.2968935 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.13179083 = queryNorm
          0.56319076 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.16720767 = (MATCH) weight(text:engine in 6) [DefaultSimilarity], result of:
        0.16720767 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.2968935 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.13179083 = queryNorm
          0.56319076 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.16720767 = (MATCH) weight(text:text in 6) [DefaultSimilarity], result of:
        0.16720767 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.2968935 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.13179083 = queryNorm
          0.56319076 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.16720767 = (MATCH) weight(text:search in 6) [DefaultSimilarity], result of:
        0.16720767 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.2968935 = queryWeight, product of:
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.13179083 = queryNorm
          0.56319076 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            2.252763 = idf(docFreq=1, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.11243437 = (MATCH) weight(text:java in 6) [DefaultSimilarity], result of:
        0.11243437 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.24345693 = queryWeight, product of:
            1.8472979 = idf(docFreq=2, maxDocs=7)
            0.13179083 = queryNorm
          0.46182448 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            1.8472979 = idf(docFreq=2, maxDocs=7)
            0.25 = fieldNorm(doc=6)
      0.11243437 = (MATCH) weight(text:lucene in 6) [DefaultSimilarity], result of:
        0.11243437 = score(doc=6,freq=1.0 = termFreq=1.0
    ), product of:
          0.24345693 = queryWeight, product of:
            1.8472979 = idf(docFreq=2, maxDocs=7)
            0.13179083 = queryNorm
          0.46182448 = fieldWeight in 6, product of:
            1.0 = tf(freq=1.0), with freq of:
              1.0 = termFreq=1.0
            1.8472979 = idf(docFreq=2, maxDocs=7)
            0.25 = fieldNorm(doc=6)
---------------
---------------
   ScoreDoc, id[0.026501035:4]: Doc => LUCENE、SOLR、Lucene, Solr

Explanation As String => 
    0.026501035 = (MATCH) product of:
      0.31801242 = (MATCH) sum of:
        0.31801242 = (MATCH) weight(text:lucene in 4) [DefaultSimilarity], result of:
          0.31801242 = score(doc=4,freq=2.0 = termFreq=2.0
    ), product of:
            0.24345693 = queryWeight, product of:
              1.8472979 = idf(docFreq=2, maxDocs=7)
              0.13179083 = queryNorm
            1.3062369 = fieldWeight in 4, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8472979 = idf(docFreq=2, maxDocs=7)
              0.5 = fieldNorm(doc=4)
      0.083333336 = coord(1/12)
---------------
---------------
   ScoreDoc, id[0.018739063:3]: Doc => Java, Scala, Groovy, Clojure

Explanation As String => 
    0.018739063 = (MATCH) product of:
      0.22486874 = (MATCH) sum of:
        0.22486874 = (MATCH) weight(text:java in 3) [DefaultSimilarity], result of:
          0.22486874 = score(doc=3,freq=1.0 = termFreq=1.0
    ), product of:
            0.24345693 = queryWeight, product of:
              1.8472979 = idf(docFreq=2, maxDocs=7)
              0.13179083 = queryNorm
            0.92364895 = fieldWeight in 3, product of:
              1.0 = tf(freq=1.0), with freq of:
                1.0 = termFreq=1.0
              1.8472979 = idf(docFreq=2, maxDocs=7)
              0.5 = fieldNorm(doc=3)
      0.083333336 = coord(1/12)
---------------

The source code created for this entry is uploaded here:

https://github.com/kazuhira-r/lucene-examples/tree/master/lucene-fuzzy-more-like-this