[Experience Sharing] Solr 4.3 solrconfig.xml configuration file

<?xml version="1.0" encoding="UTF-8" ?>
<config>

<!-- Specifies the Lucene version whose index format and behavior Solr should match -->
  <luceneMatchVersion>LUCENE_43</luceneMatchVersion>


  <!-- Locations of extra libraries (jars) Solr should load; if the directory given by "dir" does not exist, the entry is ignored (a warning is logged) -->
  <lib dir="../../../contrib/extraction/lib" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="solr-cell-\d.*\.jar" />

  <lib dir="../../../contrib/clustering/lib/" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="solr-clustering-\d.*\.jar" />

  <lib dir="../../../contrib/langid/lib/" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="solr-langid-\d.*\.jar" />

  <lib dir="../../../contrib/velocity/lib" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="solr-velocity-\d.*\.jar" />

  <lib dir="/non/existent/dir/yields/warning" />

  <!-- Defines where the index data and transaction log files are stored -->
  <dataDir>${solr.data.dir:}</dataDir>


  <!--
        Index storage implementations. The available DirectoryFactory options are:
           1. solr.StandardDirectoryFactory: a filesystem-based factory that tries to pick the best
              implementation for the current operating system and JVM version.
           2. solr.SimpleFSDirectoryFactory: suitable for small applications; does not handle large
              data volumes or many threads well.
           3. solr.NIOFSDirectoryFactory: suitable for multithreaded environments, but not recommended
              on Windows (very slow there because of a long-standing JVM bug).
           4. solr.MMapDirectoryFactory: the default on 64-bit Linux from Solr 3.1 through 4.0. It uses
              virtual memory and the mmap kernel call to access index files stored on disk, letting
              Lucene/Solr work directly against the OS I/O cache. A good choice if near-realtime
              search is not required.
           5. solr.NRTCachingDirectoryFactory: designed to keep parts of the index in memory,
              speeding up near-realtime search.
           6. solr.RAMDirectoryFactory: an in-memory implementation with no persistence; data is lost
              on restart or crash, and index replication is not supported.
           Example with HDFS-related parameters:
            <directoryFactory class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}" name="DirectoryFactory">
               <str name="solr.hdfs.home">${solr.hdfs.home:}</str>
               <str name="solr.hdfs.confdir">${solr.hdfs.confdir:}</str>
               <str name="solr.hdfs.blockcache.enabled">${solr.hdfs.blockcache.enabled:true}</str>
               <str name="solr.hdfs.blockcache.global">${solr.hdfs.blockcache.global:true}</str>
            </directoryFactory>
    -->
  <directoryFactory name="DirectoryFactory"
                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

  <!--
          The codec factory allows custom codecs to be used. For example, to enable per-field
          DocValues formats, keep SchemaCodecFactory here and set docValuesFormat on fieldType
          elements in schema.xml:
                    docValuesFormat="Lucene42": the default; all values are loaded into heap memory.
          docValuesFormat="Disk": an alternative implementation that keeps most data on disk.
          docValuesFormat="SimpleText": a plain-text format; very slow, intended only for learning.

        The CodecFactory defines the format of the inverted index. The default implementation,
        SchemaCodecFactory, writes the official Lucene index format but hooks into the schema so that
        postings lists and per-document values can be customized per field via the fieldType element
        (postingsFormat/docValuesFormat). Note that most alternative implementations are experimental,
        so if you customize the index format it is a good idea to convert back to the official format,
        e.g. via IndexWriter.addIndexes(IndexReader), before upgrading to a new release, to avoid an
        unnecessary reindex.
        When ManagedIndexSchemaFactory is specified, Solr loads the schema from the resource named by
        "managedSchemaResourceName" instead of from schema.xml. Note that the managed schema resource
        cannot be named schema.xml. If the managed schema does not exist, Solr creates it after
        reading schema.xml and then renames schema.xml to schema.xml.bak.
        Do not hand-edit the managed schema: external modifications are ignored and overwritten by
        schema modification REST API calls.
        When ManagedIndexSchemaFactory is specified with mutable = true, schema modification REST API
        calls are allowed; otherwise those requests receive error responses.
  -->
  <codecFactory class="solr.SchemaCodecFactory"/>
  <schemaFactory class="ClassicIndexSchemaFactory"/>
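
  <!-- A hypothetical schema.xml sketch of the per-field DocValues format described above
       (the type and field names here are examples only):
         <fieldType name="string_disk" class="solr.StrField" docValuesFormat="Disk" />
         <field name="popularity_dv" type="string_disk" indexed="false" stored="false" docValues="true" />
    -->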

  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
       Index Config - These settings control low-level behavior of indexing
       Most example settings here show the default value, but are commented
       out, to more easily see where customizations have been made.

       Note: This replaces <indexDefaults> and <mainIndex> from older versions
       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
  <indexConfig>
    <!-- Maximum time (in milliseconds) the IndexWriter waits to acquire the write lock -->
    <writeLockTimeout>1000</writeLockTimeout>

    <!--
         The maximum number of simultaneous threads that may be indexing documents at once in
         IndexWriter; if more than this many threads arrive, they will wait for others to finish.
         The default in Solr/Lucene is 8.-->
    <maxIndexingThreads>8</maxIndexingThreads>

    <!-- Expert: Enabling compound file will use less files for the index,
         using fewer file descriptors at the expense of a performance decrease.
         Default in Lucene is "true". Default in Solr is "false" (since 3.6)
         -->
    <useCompoundFile>false</useCompoundFile>

    <!--
         Index-time buffering: when both limits are set, whichever is hit first triggers a flush.
         ramBufferSizeMB sets the amount of RAM that may be used by Lucene for buffering added
         documents and deletions before they are flushed to the directory.
         maxBufferedDocs limits the number of documents buffered before flushing.
         If both ramBufferSizeMB and maxBufferedDocs are set, Lucene flushes based on whichever
         limit is reached first.
         -->
    <ramBufferSizeMB>100</ramBufferSizeMB>
    <maxBufferedDocs>1000</maxBufferedDocs>

    <!--
         Merge policy.
         The merge factor controls how many segments get merged at a time.
         For TieredMergePolicy, the merge factor is a convenient shorthand that sets both
         MaxMergeAtOnce and SegmentsPerTier.
         For LogByteSizeMergePolicy, the merge factor determines how many new segments are merged
         into one.
         The default merge policy uses a merge factor of 10.
      -->

        <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
          <int name="maxMergeAtOnce">10</int>
          <int name="segmentsPerTier">10</int>
        </mergePolicy>
       <!-- Merge factor: how many segments are merged at a time -->
        <mergeFactor>10</mergeFactor>

    <!--
         Merge scheduler.
         The merge scheduler controls how merges are performed by Lucene.
         ConcurrentMergeScheduler (the default since Lucene 2.3) can run merges in the background
         using separate threads.
         SerialMergeScheduler (the default in Lucene 2.2) does not.
     -->
       <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>

    <!--
        Index lock type; there are three main options:
        1. single: for a read-only index that will never be modified.
        2. native: uses the operating system's native file locking; must not be used when multiple
           Solr instances share the same index directory. This is the default since Solr 3.6.
        3. simple: uses a plain lock file.
    -->
    <lockType>${solr.lock.type:native}</lockType>

    <!--
         Whether to clear locks on startup.
         If true, any write or commit locks held at startup are removed.
         This defeats the locking mechanism that allows multiple processes to safely access a
         Lucene index, so use it with care. The default is "false".
         Not needed if the lock type is 'single'.
     -->
    <unlockOnStartup>false</unlockOnStartup>

    <!--
         Interval at which Lucene loads terms into memory.
         Controls how frequently term index entries are loaded.
         The default is 128 and is likely good for most use cases.
      -->
    <termIndexInterval>128</termIndexInterval>

    <!--
         Reopen readers instead of closing and then opening them.
         If this is true, the IndexReader will be reopened (often more efficient) rather than
         closed and then opened. Default: true
      -->
    <reopenReaders>true</reopenReaders>

    <!--
         Commit deletion policy.
         A custom deletion policy can be specified here; the class must implement
         org.apache.lucene.index.IndexDeletionPolicy.
         The default Solr IndexDeletionPolicy implementation supports deleting index commit points
         based on the number of commits kept, the number of optimized commits kept, and the age of
         a commit. The latest commit point is always preserved regardless of these criteria.
    -->
    <deletionPolicy class="solr.SolrDeletionPolicy">
      <!-- The number of commit points to keep -->
      <str name="maxCommitsToKeep">1</str>
      <!-- The number of optimized commit points to keep -->
      <str name="maxOptimizedCommitsToKeep">0</str>
      <!--
          Delete all commit points once they have reached the given age.
          Only one value should be in effect; the two lines below are alternative examples.
        -->
      <!--
         <str name="maxCommitAge">30MINUTES</str>
         <str name="maxCommitAge">1DAY</str>
       -->

    </deletionPolicy>

    <!-- Lucene Infostream

         To aid in advanced debugging, Lucene provides an "InfoStream"
         of detailed information when indexing.

         Setting the value to true will instruct the underlying Lucene
         IndexWriter to write its debugging info to the specified file
      -->
     <!-- <infoStream file="INFOSTREAM.txt">false</infoStream> -->
  </indexConfig>


  <!-- JMX
       This example enables JMX if and only if an existing MBeanServer is found; use this if you
       want to configure JMX through JVM parameters. Remove this element to disable exposing Solr
       configuration and statistics via JMX.
    -->
  <jmx />
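
  <!-- A minimal sketch of enabling a remotely accessible MBeanServer through standard JVM flags
       (the port and security settings here are illustrative only):
         java -Dcom.sun.management.jmxremote \
              -Dcom.sun.management.jmxremote.port=9999 \
              -Dcom.sun.management.jmxremote.ssl=false \
              -Dcom.sun.management.jmxremote.authenticate=false \
              -jar start.jar
    -->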
  <!-- <jmx agentId="myAgent" /> -->
  <!-- <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr"/>-->

  <!-- The default high-performance update handler -->
  <updateHandler class="solr.DirectUpdateHandler2">

    <!--
        Enables the update transaction log; the default path is data/tlog under the Solr home.
        As the index is updated frequently the tlog files keep growing, so it is recommended to
        use hard auto commits (<autoCommit> below) to commit in batches.
    -->
    <updateLog>
      <str name="dir">${solr.ulog.dir:}</str>
    </updateLog>

    <!-- Automatic hard commits

          maxTime: maximum amount of time (in ms) that may pass since a document was added before a
          new commit is automatically triggered.
          maxDocs: maximum number of documents added since the last commit before a new commit is
          automatically triggered.
          openSearcher: whether to open a new searcher after the commit. If false, documents are
          only flushed to the index and do not yet appear in search results; if true, they are
          committed and also become visible to searches.
         Rather than relying only on auto commit, consider using "commitWithin" when adding
         documents.
         If the updateLog is enabled, it is strongly recommended to have some kind of hard auto
         commit to limit the size of the log.
      -->
     <autoCommit>
       <maxTime>15000</maxTime>
       <openSearcher>false</openSearcher>
     </autoCommit>
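
     <!-- For illustration, "commitWithin" can be supplied per update request instead of relying
          solely on autoCommit. A minimal XML update message sketch (the field values are examples):
            <add commitWithin="10000">
              <doc>
                <field name="id">example-1</field>
              </doc>
            </add>
       -->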

    <!--
         Soft commit vs hard commit: use either, or both together.
         autoSoftCommit is like autoCommit except that it performs a "soft" commit, which only
         makes the changes visible to searches but does not guarantee that the data is synced to
         disk. This is faster and more near-realtime friendly than a hard commit.
      -->
       <autoSoftCommit>
         <maxTime>1000</maxTime>
       </autoSoftCommit>

    <!--
         Update-related event listeners:
             various IndexWriter-related events can trigger listeners to take action.
             postCommit: fired after every commit or optimize command
             postOptimize: fired after every optimize command

         RunExecutableListener executes an external command from a hook such as postCommit or
         postOptimize.
         exe - the name of the executable to run
         dir - the working directory to use (default=".", the current directory)
         wait - whether the calling thread waits until the executable returns (default="true")
         args - arguments to pass to the program (default: none)
         env - environment variables to set (default: none)
      -->
    <!--
       <listener event="postCommit" class="solr.RunExecutableListener">
         <str name="exe">solr/bin/snapshooter</str>
         <str name="dir">.</str>
         <bool name="wait">true</bool>
         <arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
         <arr name="env"> <str>MYVAR=val1</str> </arr>
       </listener>
      -->

  </updateHandler>

  <!-- IndexReaderFactory

       Use the following format to specify a custom IndexReaderFactory,
       which allows for alternate IndexReader implementations.

       ** Experimental Feature **

       Please note - Using a custom IndexReaderFactory may prevent
       certain other features from working. The API to
       IndexReaderFactory may change without warning or may even be
       removed from future releases if the problems cannot be
       resolved.


       ** Features that may not work with custom IndexReaderFactory **

       The ReplicationHandler assumes a disk-resident index. Using a
       custom IndexReader implementation may cause incompatibility
       with ReplicationHandler and may cause replication to not work
       correctly. See SOLR-1366 for details.

    -->
  <!--
  <indexReaderFactory name="IndexReaderFactory" class="package.class">
    <str name="someArg">Some Value</str>
  </indexReaderFactory >
  -->
  <!-- By explicitly declaring the Factory, the termIndexDivisor can
       be specified.
    -->
  <!--
     <indexReaderFactory name="IndexReaderFactory"
                         class="solr.StandardIndexReaderFactory">
       <int name="setTermIndexDivisor">12</int>
     </indexReaderFactory >
    -->

  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
       Query section - these settings control query time things like caches
       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
  <query>
    <!-- Max Boolean Clauses

         Sets the maximum number of clauses in a boolean query. Range and prefix searches can
         expand into a large number of boolean clauses; if this limit is reached an exception is
         thrown. Limiting the clause count keeps overly complex queries from running for too long.
      -->
    <maxBooleanClauses>1024</maxBooleanClauses>


    <!-- Solr query caches

         There are two cache implementations available in Solr: LRUCache, based on a synchronized
         LinkedHashMap, and FastLRUCache, based on a ConcurrentHashMap.
         FastLRUCache has faster gets and slower puts in single-threaded operation, so it is
         generally faster than LRUCache when the cache hit ratio is high (> 75%), and it may be
         faster in other scenarios on multi-CPU systems.
    -->

    <!--
         Filter cache.
         Caches the filters (unordered sets of document IDs, i.e. DocSets) used by
         SolrIndexSearcher for filter queries.
         When a new searcher is opened, its caches may be prepopulated ("autowarmed") with data
         from the caches of the old searcher.
         autowarmCount is the number of items taken from the old cache to prepopulate the new one.
         For LRUCache, the autowarmed items are the most recently accessed ones.
         Parameters:
         class - the SolrCache implementation (LRUCache or FastLRUCache)
         size - the maximum number of entries in the cache
         initialSize - the initial capacity (number of entries) of the cache (see java.util.HashMap)
         autowarmCount - the number of entries to prepopulate from the old cache
      -->
    <filterCache class="solr.FastLRUCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="0"/>

    <!--
         Query result cache.
         Caches the results of searches as ordered lists of document IDs (DocList), keyed on the
         query, the sort, and the range of documents requested.
      -->
    <queryResultCache class="solr.LRUCache"
                     size="512"
                     initialSize="512"
                     autowarmCount="0"/>

    <!--
         Document cache.
         Caches Lucene Document objects (the stored fields of each document). Since Lucene internal
         document IDs are transient, this cache is not autowarmed.
      -->
    <documentCache class="solr.LRUCache"
                   size="512"
                   initialSize="512"
                   autowarmCount="0"/>

    <!-- Field value cache.
         Used to hold field values for fast access by document ID. The fieldValueCache is created
         by default even if it is not configured here.
      -->
       <fieldValueCache class="solr.FastLRUCache"
                        size="512"
                        autowarmCount="128"
                        showItems="32" />

    <!-- Custom cache

         Example of a generic cache.  These caches may be accessed by
         name through SolrIndexSearcher.getCache(),cacheLookup(), and
         cacheInsert().  The purpose is to enable easy caching of
         user/application level data.  The regenerator argument should
         be specified as an implementation of solr.CacheRegenerator
         if autowarming is desired.  
      -->
    <!--
       <cache name="myUserCache"
              class="solr.LRUCache"
              size="4096"
              initialSize="1024"
              autowarmCount="1024"
              regenerator="com.mycompany.MyRegenerator"
              />
      -->


    <!--
         Lazy field loading.
         If true, stored fields that are not requested will be loaded lazily.
         This can result in a significant speed improvement if the usual case is not to load all
         stored fields, especially if the skipped fields are large compressed text fields.
    -->
    <enableLazyFieldLoading>true</enableLazyFieldLoading>

   <!-- Use Filter For Sorted Query

        A possible optimization that attempts to use a filter to
        satisfy a search.  If the requested sort does not include
        score, then the filterCache will be checked for a filter
        matching the query. If found, the filter will be used as the
        source of document ids, and then the sort will be applied to
        that.

        For most situations, this will not be useful unless you
        frequently get the same search repeatedly with different sort
        options, and none of them ever use "score"
     -->
   <!--
      <useFilterForSortedQuery>true</useFilterForSortedQuery>
     -->

   <!-- Result Window Size

        An optimization for use with the queryResultCache.  When a search
        is requested, a superset of the requested number of document ids
        are collected.  For example, if a search for a particular query
        requests matching documents 10 through 19, and queryWindowSize is 50,
        then documents 0 through 49 will be collected and cached.  Any further
        requests in that range can be satisfied via the cache.  
     -->
   <queryResultWindowSize>20</queryResultWindowSize>

   <!-- Maximum number of documents to cache for any entry in the
        queryResultCache.
     -->
   <queryResultMaxDocsCached>200</queryResultMaxDocsCached>

   <!-- Query Related Event Listeners

        Various IndexSearcher related events can trigger Listeners to
        take actions.

        newSearcher - fired whenever a new searcher is being prepared
        and there is a current searcher handling requests (aka
        registered).  It can be used to prime certain caches to
        prevent long request times for certain requests.

        firstSearcher - fired whenever a new searcher is being
        prepared but there is no current registered searcher to handle
        requests or to gain autowarming data from.


     -->
    <!-- QuerySenderListener takes an array of NamedList and executes a
         local query request for each NamedList in sequence.
      -->
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <!--
           <lst><str name="q">solr</str><str name="sort">price asc</str></lst>
           <lst><str name="q">rocks</str><str name="sort">weight asc</str></lst>
          -->
      </arr>
    </listener>
    <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">static firstSearcher warming in solrconfig.xml</str>
        </lst>
      </arr>
    </listener>

    <!--
    Use cold searcher: the Solr default is false. If a request arrives while no searcher is
    registered, "true" registers the still-warming searcher immediately and uses it, while "false"
    makes all requests block until the first searcher has finished warming.
      -->
    <useColdSearcher>false</useColdSearcher>

    <!--
         Maximum warming searchers.
         The maximum number of searchers that may be warming in the background concurrently.
         An error is returned if this limit is exceeded.
         Recommended values are 1-2 for read-only slaves, higher for masters without cache warming.
      -->
    <maxWarmingSearchers>2</maxWarmingSearchers>

  </query>


  <!-- Request dispatcher
        Describes how SolrDispatchFilter handles requests arriving at a SolrCore.
        handleSelect is a legacy attribute that affects how requests such as /select?qt=XXX behave.
        When handleSelect="true", SolrDispatchFilter forwards the request to the handler named by
        qt (provided /select is registered).
        When handleSelect="false", /select is accessed directly, and a 404 is returned if /select
        is not registered.
    -->
  <requestDispatcher handleSelect="false" >
    <!-- Request parsing
        These settings govern how Solr requests are parsed and what limits apply to ContentStreams.
         enableRemoteStreaming - whether the stream.file and stream.url parameters may be used to
           pull in remote streams.
         multipartUploadLimitInKB - the maximum size Solr allows for multipart file uploads.
         formdataUploadLimitInKB - the maximum size of form data sent via POST.
        WARNING: the settings below authorize Solr to fetch remote files. Make sure your system has
        some form of authentication in place before using enableRemoteStreaming="true".

      -->
    <requestParsers enableRemoteStreaming="true"
                    multipartUploadLimitInKB="2048000"
                    formdataUploadLimitInKB="2048"/>

    <!-- HTTP Caching
         Parameters controlling the HTTP cache headers Solr emits.
      -->
    <httpCaching never304="true" />
    <!-- If you include a <cacheControl> directive, it will be used to
         generate a Cache-Control header (as well as an Expires header
         if the value contains "max-age=")

         By default, no Cache-Control header is generated.

         You can use the <cacheControl> option even if you have set
         never304="true"
      -->
    <!--
       <httpCaching never304="true" >
         <cacheControl>max-age=30, public</cacheControl>
       </httpCaching>
      -->
    <!-- To enable Solr to respond with automatically generated HTTP
         Caching headers, and to response to Cache Validation requests
         correctly, set the value of never304="false"

         This will cause Solr to generate Last-Modified and ETag
         headers based on the properties of the Index.

         The following options can also be specified to affect the
         values of these headers...

         lastModFrom - the default value is "openTime" which means the
         Last-Modified value (and validation against If-Modified-Since
         requests) will all be relative to when the current Searcher
         was opened.  You can change it to lastModFrom="dirLastMod" if
         you want the value to exactly correspond to when the physical
         index was last modified.

         etagSeed="..." is an option you can change to force the ETag
         header (and validation against If-None-Match requests) to be
         different even if the index has not changed (ie: when making
         significant changes to your config file)

         (lastModifiedFrom and etagSeed are both ignored if you use
         the never304="true" option)
      -->
    <!--
       <httpCaching lastModifiedFrom="openTime"
                    etagSeed="Solr">
         <cacheControl>max-age=30, public</cacheControl>
       </httpCaching>
      -->
  </requestDispatcher>

  <!-- Request Handlers
        Incoming requests are dispatched to a specific handler based on the path in the request.
    -->
  <!-- SearchHandler

       The basic request handler is SearchHandler, which runs a pipeline of SearchComponents.
       Distributed search across multiple shards is supported.
    -->
  <requestHandler name="/select" class="solr.SearchHandler">
    <!-- Default parameter values can be specified here. -->
     <lst name="defaults">
       <str name="echoParams">explicit</str>
       <int name="rows">10</int>
       <str name="df">text</str>
     </lst>
    <!-- Parameters appended to every request -->
    <!--
       <lst name="appends">
         <str name="fq">inStock:true</str>
       </lst>
      -->
    <!-- "invariants" are applied the same way but cannot be overridden by the request; use sparingly. -->
    <!--
       <lst name="invariants">
         <str name="facet.field">cat</str>
         <str name="facet.field">manu_exact</str>
         <str name="facet.query">price:[* TO 500]</str>
         <str name="facet.query">price:[500 TO *]</str>
       </lst>
      -->
    <!-- The following configuration overrides the default list of SearchComponents -->
    <!--
       <arr name="components">
         <str>nameOfCustomComponent1</str>
         <str>nameOfCustomComponent2</str>
       </arr>
      -->
    </requestHandler>
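
  <!-- For illustration, with the defaults above a bare request such as
         /select?q=solr
       searches the "text" field (df), returns 10 rows, and echoes the explicitly passed
       parameters; any of these defaults can be overridden per request, e.g. /select?q=solr&rows=20
    -->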

  <!-- A request handler that returns indented JSON by default -->
  <requestHandler name="/query" class="solr.SearchHandler">
     <lst name="defaults">
       <str name="echoParams">explicit</str>
       <str name="wt">json</str>
       <str name="indent">true</str>
       <str name="df">text</str>
     </lst>
  </requestHandler>


  <!-- realtime get handler, guaranteed to return the latest stored fields of
       any document, without the need to commit or open a new searcher.  The
       current implementation relies on the updateLog feature being enabled. -->
  <requestHandler name="/get" class="solr.RealTimeGetHandler">
     <lst name="defaults">
       <str name="omitHeader">true</str>
       <str name="wt">json</str>
       <str name="indent">true</str>
     </lst>
  </requestHandler>
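
  <!-- A quick usage sketch of the realtime get handler (the document IDs are examples):
         /get?id=mydoc
         /get?ids=doc1,doc2
       returns the latest stored fields of the named documents even before a commit, as long as
       the updateLog above is enabled.
    -->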


  <!-- A Robust Example

       This example SearchHandler declaration shows off usage of the
       SearchHandler with many defaults declared

       Note that multiple instances of the same Request Handler
       (SearchHandler) can be registered multiple times with different
       names (and different init parameters)
    -->
  <requestHandler name="/browse" class="solr.SearchHandler">
     <lst name="defaults">
       <str name="echoParams">explicit</str>

       <!-- VelocityResponseWriter settings -->
       <str name="wt">velocity</str>
       <str name="v.template">browse</str>
       <str name="v.layout">layout</str>
       <str name="title">Solritas</str>

       <!-- Query settings -->
       <str name="defType">edismax</str>
       <str name="qf">
          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
          title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0
       </str>
       <str name="df">text</str>
       <str name="mm">100%</str>
       <str name="q.alt">*:*</str>
       <str name="rows">10</str>
       <str name="fl">*,score</str>

       <str name="mlt.qf">
         text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
         title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0
       </str>
       <str name="mlt.fl">text,features,name,sku,id,manu,cat,title,description,keywords,author,resourcename</str>
       <int name="mlt.count">3</int>

       <!-- Faceting defaults -->
       <str name="facet">on</str>
       <str name="facet.field">cat</str>
       <str name="facet.field">manu_exact</str>
       <str name="facet.field">content_type</str>
       <str name="facet.field">author_s</str>
       <str name="facet.query">ipod</str>
       <str name="facet.query">GB</str>
       <str name="facet.mincount">1</str>
       <str name="facet.pivot">cat,inStock</str>
       <str name="facet.range.other">after</str>
       <str name="facet.range">price</str>
       <int name="f.price.facet.range.start">0</int>
       <int name="f.price.facet.range.end">600</int>
       <int name="f.price.facet.range.gap">50</int>
       <str name="facet.range">popularity</str>
       <int name="f.popularity.facet.range.start">0</int>
       <int name="f.popularity.facet.range.end">10</int>
       <int name="f.popularity.facet.range.gap">3</int>
       <str name="facet.range">manufacturedate_dt</str>
       <str name="f.manufacturedate_dt.facet.range.start">NOW/YEAR-10YEARS</str>
       <str name="f.manufacturedate_dt.facet.range.end">NOW</str>
       <str name="f.manufacturedate_dt.facet.range.gap">+1YEAR</str>
       <str name="f.manufacturedate_dt.facet.range.other">before</str>
       <str name="f.manufacturedate_dt.facet.range.other">after</str>

       <!-- Highlighting defaults -->
       <str name="hl">on</str>
       <str name="hl.fl">content features title name</str>
       <str name="hl.encoder">html</str>
       <str name="hl.simple.pre">&lt;b&gt;</str>
       <str name="hl.simple.post">&lt;/b&gt;</str>
       <str name="f.title.hl.fragsize">0</str>
       <str name="f.title.hl.alternateField">title</str>
       <str name="f.name.hl.fragsize">0</str>
       <str name="f.name.hl.alternateField">name</str>
       <str name="f.content.hl.snippets">3</str>
       <str name="f.content.hl.fragsize">200</str>
       <str name="f.content.hl.alternateField">content</str>
       <str name="f.content.hl.maxAlternateFieldLength">750</str>

       <!-- Spell checking defaults -->
       <str name="spellcheck">on</str>
       <str name="spellcheck.extendedResults">false</str>      
       <str name="spellcheck.count">5</str>
       <str name="spellcheck.alternativeTermCount">2</str>
       <str name="spellcheck.maxResultsForSuggest">5</str>      
       <str name="spellcheck.collate">true</str>
       <str name="spellcheck.collateExtendedResults">true</str>  
       <str name="spellcheck.maxCollationTries">5</str>
       <str name="spellcheck.maxCollations">3</str>           
     </lst>

     <!-- append spellchecking to our list of components -->
     <arr name="last-components">
       <str>spellcheck</str>
     </arr>
  </requestHandler>


  <!-- Update Request Handler.  

       http://wiki.apache.org/solr/UpdateXmlMessages

       The canonical Request Handler for Modifying the Index through
       commands specified using XML, JSON, CSV, or JAVABIN

       Note: Since solr1.1 requestHandlers requires a valid content
       type header if posted in the body. For example, curl now
       requires: -H 'Content-type:text/xml; charset=utf-8'

       To override the request content type and force a specific
       Content-type, use the request parameter:
         ?update.contentType=text/csv

       This handler will pick a response format to match the input
       if the 'wt' parameter is not explicit
    -->
  <requestHandler name="/update" class="solr.UpdateRequestHandler">
    <!-- See below for information on defining
         updateRequestProcessorChains that can be used by name
         on each Update Request
      -->
    <!--
       <lst name="defaults">
         <str name="update.chain">dedupe</str>
       </lst>
       -->
  </requestHandler>
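
  <!-- A minimal usage sketch of the update handler from the command line (the host, port and
       field values are examples):
         curl 'http://localhost:8983/solr/update?commit=true' \
              -H 'Content-type:text/xml; charset=utf-8' \
              -d '<add><doc><field name="id">example-1</field></doc></add>'
    -->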

  <!-- for back compat with clients using /update/json and /update/csv -->  
  <requestHandler name="/update/json" class="solr.JsonUpdateRequestHandler">
        <lst name="defaults">
         <str name="stream.contentType">application/json</str>
       </lst>
  </requestHandler>
  <requestHandler name="/update/csv" class="solr.CSVRequestHandler">
        <lst name="defaults">
         <str name="stream.contentType">application/csv</str>
       </lst>
  </requestHandler>

  <!-- Solr Cell Update Request Handler

       http://wiki.apache.org/solr/ExtractingRequestHandler

    -->
  <requestHandler name="/update/extract"
                  startup="lazy"
                  class="solr.extraction.ExtractingRequestHandler" >
    <lst name="defaults">
      <str name="lowernames">true</str>
      <str name="uprefix">ignored_</str>

      <!-- capture link hrefs but ignore div attributes -->
      <str name="captureAttr">true</str>
      <str name="fmap.a">links</str>
      <str name="fmap.div">ignored_</str>
    </lst>
  </requestHandler>
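
  <!-- A usage sketch of the extracting handler (the file name and id are examples); the
       uprefix/fmap defaults above then map any unknown extracted fields into ignored_* fields:
         curl 'http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true' \
              -F 'myfile=@tutorial.html'
    -->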


  <!-- Field Analysis Request Handler

       RequestHandler that provides much the same functionality as
       analysis.jsp. Provides the ability to specify multiple field
       types and field names in the same request and outputs
       index-time and query-time analysis for each of them.

       Request parameters are:
       analysis.fieldname - field name whose analyzers are to be used

       analysis.fieldtype - field type whose analyzers are to be used
       analysis.fieldvalue - text for index-time analysis
       q (or analysis.q) - text for query time analysis
       analysis.showmatch (true|false) - When set to true and when
           query analysis is performed, the produced tokens of the
           field value analysis will be marked as "matched" for every
           token that is produces by the query analysis
   -->
  <requestHandler name="/analysis/field"
                  startup="lazy"
                  class="solr.FieldAnalysisRequestHandler" />


  <!-- Document Analysis Handler

       http://wiki.apache.org/solr/AnalysisRequestHandler

       An analysis handler that provides a breakdown of the analysis
       process of provided documents. This handler expects a (single)
       content stream with the following format:

       <docs>
         <doc>
           <field name="id">1</field>
           <field name="name">The Name</field>
           <field name="text">The Text Value</field>
         </doc>
         <doc>...</doc>
         <doc>...</doc>
         ...
       </docs>

    Note: Each document must contain a field which serves as the
    unique key. This key is used in the returned response to associate
    an analysis breakdown to the analyzed document.

    Like the FieldAnalysisRequestHandler, this handler also supports
    query analysis by sending either an "analysis.query" or "q"
    request parameter that holds the query text to be analyzed. It
    also supports the "analysis.showmatch" parameter which when set to
    true, all field tokens that match the query tokens will be marked
    as a "match".
  -->
  <requestHandler name="/analysis/document"
                  class="solr.DocumentAnalysisRequestHandler"
                  startup="lazy" />

  <!-- Admin Handlers

       Admin Handlers - This will register all the standard admin
       RequestHandlers.  
    -->
  <requestHandler name="/admin/"
                  class="solr.admin.AdminHandlers" />
  <!-- This single handler is equivalent to the following... -->
  <!--
     <requestHandler name="/admin/luke"       class="solr.admin.LukeRequestHandler" />
     <requestHandler name="/admin/system"     class="solr.admin.SystemInfoHandler" />
     <requestHandler name="/admin/plugins"    class="solr.admin.PluginInfoHandler" />
     <requestHandler name="/admin/threads"    class="solr.admin.ThreadDumpHandler" />
     <requestHandler name="/admin/properties" class="solr.admin.PropertiesRequestHandler" />
     <requestHandler name="/admin/file"       class="solr.admin.ShowFileRequestHandler" >
    -->
  <!-- If you wish to hide files under ${solr.home}/conf, explicitly
       register the ShowFileRequestHandler using:
    -->
  <!--
     <requestHandler name="/admin/file"
                     class="solr.admin.ShowFileRequestHandler" >
       <lst name="invariants">
         <str name="hidden">synonyms.txt</str>
         <str name="hidden">anotherfile.txt</str>
       </lst>
     </requestHandler>
    -->

  <!-- ping/healthcheck -->
  <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
    <lst name="invariants">
      <str name="q">solrpingquery</str>
    </lst>
    <lst name="defaults">
      <str name="echoParams">all</str>
    </lst>
    <!-- An optional feature of the PingRequestHandler is to configure the
         handler with a "healthcheckFile" which can be used to enable/disable
         the PingRequestHandler.
         relative paths are resolved against the data dir
      -->
    <!-- <str name="healthcheckFile">server-enabled.txt</str> -->
  </requestHandler>

  <!-- Echo the request contents back to the client -->
  <requestHandler name="/debug/dump" class="solr.DumpRequestHandler" >
    <lst name="defaults">
     <str name="echoParams">explicit</str>
     <str name="echoHandler">true</str>
    </lst>
  </requestHandler>

  <!-- Solr Replication

       The SolrReplicationHandler supports replicating indexes from a
       "master" used for indexing and "slaves" used for queries.

       http://wiki.apache.org/solr/SolrReplication

       It is also necessary for SolrCloud to function (in Cloud mode, the
       replication handler is used to bulk transfer segments when nodes
       are added or need to recover).

       https://wiki.apache.org/solr/SolrCloud/
    -->
  <requestHandler name="/replication" class="solr.ReplicationHandler" >
    <!--
       To enable simple master/slave replication, uncomment one of the
       sections below, depending on whether this solr instance should be
       the "master" or a "slave".  If this instance is a "slave" you will
       also need to fill in the masterUrl to point to a real machine.
    -->
    <!--
       <lst name="master">
         <str name="replicateAfter">commit</str>
         <str name="replicateAfter">startup</str>
         <str name="confFiles">schema.xml,stopwords.txt</str>
       </lst>
    -->
    <!--
       <lst name="slave">
         <str name="masterUrl">http://your-master-hostname:8983/solr</str>
         <str name="pollInterval">00:00:60</str>
       </lst>
    -->
  </requestHandler>

  <!-- Search Components

       Search components are registered to SolrCore and used by
       instances of SearchHandler (which can access them by name)

       By default, the following components are available:

       <searchComponent name="query"     class="solr.QueryComponent" />
       <searchComponent name="facet"     class="solr.FacetComponent" />
       <searchComponent name="mlt"       class="solr.MoreLikeThisComponent" />
       <searchComponent name="highlight" class="solr.HighlightComponent" />
       <searchComponent name="stats"     class="solr.StatsComponent" />
       <searchComponent name="debug"     class="solr.DebugComponent" />

       Default configuration in a requestHandler would look like:

       <arr name="components">
         <str>query</str>
         <str>facet</str>
         <str>mlt</str>
         <str>highlight</str>
         <str>stats</str>
         <str>debug</str>
       </arr>

       If you register a searchComponent to one of the standard names,
       that will be used instead of the default.

       To insert components before or after the 'standard' components, use:

       <arr name="first-components">
         <str>myFirstComponentName</str>
       </arr>

       <arr name="last-components">
         <str>myLastComponentName</str>
       </arr>

       NOTE: The component registered with the name "debug" will
       always be executed after the "last-components"

     -->

   <!-- Spell Check

        The spell check component can return a list of alternative spelling
        suggestions.  

        http://wiki.apache.org/solr/SpellCheckComponent
     -->
  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">

    <str name="queryAnalyzerFieldType">text_general</str>

    <!-- Multiple "Spell Checkers" can be declared and used by this
         component
      -->

    <!-- a spellchecker built from a field of the main index -->
    <lst name="spellchecker">
      <str name="name">default</str>
      <str name="field">text</str>
      <str name="classname">solr.DirectSolrSpellChecker</str>
      <!-- the spellcheck distance measure used, the default is the internal levenshtein -->
      <str name="distanceMeasure">internal</str>
      <!-- minimum accuracy needed to be considered a valid spellcheck suggestion -->
      <float name="accuracy">0.5</float>
      <!-- the maximum #edits we consider when enumerating terms: can be 1 or 2 -->
      <int name="maxEdits">2</int>
      <!-- the minimum shared prefix when enumerating terms -->
      <int name="minPrefix">1</int>
      <!-- maximum number of inspections per result. -->
      <int name="maxInspections">5</int>
      <!-- minimum length of a query term to be considered for correction -->
      <int name="minQueryLength">4</int>
      <!-- maximum threshold of documents a query term can appear to be considered for correction -->
      <float name="maxQueryFrequency">0.01</float>
      <!-- uncomment this to require suggestions to occur in 1% of the documents
          <float name="thresholdTokenFrequency">.01</float>
      -->
    </lst>

    <!-- a spellchecker that can break or combine words.  See "/spell" handler below for usage -->
    <lst name="spellchecker">
      <str name="name">wordbreak</str>
      <str name="classname">solr.WordBreakSolrSpellChecker</str>      
      <str name="field">name</str>
      <str name="combineWords">true</str>
      <str name="breakWords">true</str>
      <int name="maxChanges">10</int>
    </lst>

    <!-- a spellchecker that uses a different distance measure -->
    <!--
       <lst name="spellchecker">
         <str name="name">jarowinkler</str>
         <str name="field">spell</str>
         <str name="classname">solr.DirectSolrSpellChecker</str>
         <str name="distanceMeasure">
           org.apache.lucene.search.spell.JaroWinklerDistance
         </str>
       </lst>
     -->

    <!-- a spellchecker that use an alternate comparator

         comparatorClass be one of:
          1. score (default)
          2. freq (Frequency first, then score)
          3. A fully qualified class name
      -->
    <!--
       <lst name="spellchecker">
         <str name="name">freq</str>
         <str name="field">lowerfilt</str>
         <str name="classname">solr.DirectSolrSpellChecker</str>
         <str name="comparatorClass">freq</str>
      -->

    <!-- A spellchecker that reads the list of words from a file -->
    <!--
       <lst name="spellchecker">
         <str name="classname">solr.FileBasedSpellChecker</str>
         <str name="name">file</str>
         <str name="sourceLocation">spellings.txt</str>
         <str name="characterEncoding">UTF-8</str>
         <str name="spellcheckIndexDir">spellcheckerFile</str>
       </lst>
      -->
  </searchComponent>

  <!-- A request handler for demonstrating the spellcheck component.  

       NOTE: This is purely as an example.  The whole purpose of the
       SpellCheckComponent is to hook it into the request handler that
       handles your normal user queries so that a separate request is
       not needed to get suggestions.

       IN OTHER WORDS, THERE IS REALLY GOOD CHANCE THE SETUP BELOW IS
       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!

       See http://wiki.apache.org/solr/SpellCheckComponent for details
       on the request parameters.
    -->
  <requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="df">text</str>
      <!-- Solr will use suggestions from both the 'default' spellchecker
           and from the 'wordbreak' spellchecker and combine them.
           collations (re-written queries) can include a combination of
           corrections from both spellcheckers -->
      <str name="spellcheck.dictionary">default</str>
      <str name="spellcheck.dictionary">wordbreak</str>
      <str name="spellcheck">on</str>
      <str name="spellcheck.extendedResults">true</str>      
      <str name="spellcheck.count">10</str>
      <str name="spellcheck.alternativeTermCount">5</str>
      <str name="spellcheck.maxResultsForSuggest">5</str>      
      <str name="spellcheck.collate">true</str>
      <str name="spellcheck.collateExtendedResults">true</str>  
      <str name="spellcheck.maxCollationTries">10</str>
      <str name="spellcheck.maxCollations">5</str>         
    </lst>
    <arr name="last-components">
      <str>spellcheck</str>
    </arr>
  </requestHandler>
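
  <!-- A usage sketch of the /spell handler defined above (the misspelled query text is an example):
         /spell?q=hell%20ultrashar&spellcheck=true
       suggestions come from both the "default" and "wordbreak" dictionaries configured in the
       spellcheck component.
    -->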

  <!-- Term Vector Component

       http://wiki.apache.org/solr/TermVectorComponent
    -->
  <searchComponent name="tvComponent" class="solr.TermVectorComponent"/>

  <!-- A request handler for demonstrating the term vector component

       This is purely as an example.

       In reality you will likely want to add the component to your
       already specified request handlers.
    -->
  <requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="df">text</str>
      <bool name="tv">true</bool>
    </lst>
    <arr name="last-components">
      <str>tvComponent</str>
    </arr>
  </requestHandler>

  <!-- Clustering Component

       http://wiki.apache.org/solr/ClusteringComponent

       You'll need to set the solr.clustering.enabled system property
       when running solr to run with clustering enabled:

            java -Dsolr.clustering.enabled=true -jar start.jar

    -->
  <searchComponent name="clustering"
                   enable="${solr.clustering.enabled:false}"
                   class="solr.clustering.ClusteringComponent" >
    <!-- Declare an engine -->
    <lst name="engine">
      <!-- The name, only one can be named "default" -->
      <str name="name">default</str>

      <!-- Class name of Carrot2 clustering algorithm.

           Currently available algorithms are:

           * org.carrot2.clustering.lingo.LingoClusteringAlgorithm
           * org.carrot2.clustering.stc.STCClusteringAlgorithm
           * org.carrot2.clustering.kmeans.BisectingKMeansClusteringAlgorithm

           See http://project.carrot2.org/algorithms.html for the
           algorithm's characteristics.
        -->
      <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>

      <!-- Overriding values for Carrot2 default algorithm attributes.

           For a description of all available attributes, see:
           http://download.carrot2.org/stable/manual/#chapter.components.
           Use attribute key as name attribute of str elements
           below. These can be further overridden for individual
           requests by specifying attribute key as request parameter
           name and attribute value as parameter value.
        -->
      <str name="LingoClusteringAlgorithm.desiredClusterCountBase">20</str>

      <!-- Location of Carrot2 lexical resources.

           A directory from which to load Carrot2-specific stop words
           and stop labels. Absolute or relative to Solr config directory.
           If a specific resource (e.g. stopwords.en) is present in the
           specified dir, it will completely override the corresponding
           default one that ships with Carrot2.

           For an overview of Carrot2 lexical resources, see:
           http://download.carrot2.org/head ... r.lexical-resources
        -->
      <str name="carrot.lexicalResourcesDir">clustering/carrot2</str>

      <!-- The language to assume for the documents.

           For a list of allowed values, see:
           http://download.carrot2.org/stab ... ing.defaultLanguage
       -->
      <str name="MultilingualClustering.defaultLanguage">ENGLISH</str>
    </lst>
    <lst name="engine">
      <str name="name">stc</str>
      <str name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>
    </lst>
  </searchComponent>

  <!-- A request handler for demonstrating the clustering component

       This is purely as an example.

       In reality you will likely want to add the component to your
       already specified request handlers.
    -->
  <requestHandler name="/clustering"
                  startup="lazy"
                  enable="${solr.clustering.enabled:false}"
                  class="solr.SearchHandler">
    <lst name="defaults">
      <bool name="clustering">true</bool>
      <str name="clustering.engine">default</str>
      <bool name="clustering.results">true</bool>
      <!-- The title field -->
      <str name="carrot.title">name</str>
      <str name="carrot.url">id</str>
      <!-- The field to cluster on -->
       <str name="carrot.snippet">features</str>
       <!-- produce summaries -->
       <bool name="carrot.produceSummary">true</bool>
       <!-- the maximum number of labels per cluster -->
       <!--<int name="carrot.numDescriptions">5</int>-->
       <!-- produce sub clusters -->
       <bool name="carrot.outputSubClusters">false</bool>

       <str name="defType">edismax</str>
       <str name="qf">
         text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
       </str>
       <str name="q.alt">*:*</str>
       <str name="rows">10</str>
       <str name="fl">*,score</str>
    </lst>     
    <arr name="last-components">
      <str>clustering</str>
    </arr>
  </requestHandler>

  <!-- Terms Component

       http://wiki.apache.org/solr/TermsComponent

       A component to return terms and document frequency of those
       terms
    -->
  <searchComponent name="terms" class="solr.TermsComponent"/>

  <!-- A request handler for demonstrating the terms component -->
  <requestHandler name="/terms" class="solr.SearchHandler" startup="lazy">
     <lst name="defaults">
      <bool name="terms">true</bool>
      <bool name="distrib">false</bool>
    </lst>     
    <arr name="components">
      <str>terms</str>
    </arr>
  </requestHandler>
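
  <!-- A usage sketch of the terms handler (the field name and prefix are examples):
         /terms?terms.fl=name&terms.prefix=i&terms.limit=10
       returns indexed terms from the "name" field together with their document frequencies.
    -->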


  <!-- Query Elevation Component

       http://wiki.apache.org/solr/QueryElevationComponent

       a search component that enables you to configure the top
       results for a given query regardless of the normal lucene
       scoring.
    -->
  <searchComponent name="elevator" class="solr.QueryElevationComponent" >
    <!-- pick a fieldType to analyze queries -->
    <str name="queryFieldType">string</str>
    <str name="config-file">elevate.xml</str>
  </searchComponent>
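
  <!-- A sketch of what the referenced elevate.xml might contain (the query text and document IDs
       are examples); documents listed for a query are forced to the top of its results:
         <elevate>
           <query text="foo bar">
             <doc id="1" />
             <doc id="2" />
           </query>
         </elevate>
    -->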

  <!-- A request handler for demonstrating the elevator component -->
  <requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="df">text</str>
    </lst>
    <arr name="last-components">
      <str>elevator</str>
    </arr>
  </requestHandler>

  <!-- Highlighting Component

       http://wiki.apache.org/solr/HighlightingParameters
    -->
  <searchComponent class="solr.HighlightComponent" name="highlight">
    <highlighting>
      <!-- Configure the standard fragmenter -->
      <!-- This could most likely be commented out in the "default" case -->
      <fragmenter name="gap"
                  default="true"
                  class="solr.highlight.GapFragmenter">
        <lst name="defaults">
          <int name="hl.fragsize">100</int>
        </lst>
      </fragmenter>

      <!-- A regular-expression-based fragmenter
           (for sentence extraction)
        -->
      <fragmenter name="regex"
                  class="solr.highlight.RegexFragmenter">
        <lst name="defaults">
          <!-- slightly smaller fragsizes work better because of slop -->
          <int name="hl.fragsize">70</int>
          <!-- allow 50% slop on fragment sizes -->
          <float name="hl.regex.slop">0.5</float>
          <!-- a basic sentence pattern -->
          <str name="hl.regex.pattern">[-\w ,/\n\&quot;&apos;]{20,200}</str>
        </lst>
      </fragmenter>

      <!-- Configure the standard formatter -->
      <formatter name="html"
                 default="true"
                 class="solr.highlight.HtmlFormatter">
        <lst name="defaults">
          <str name="hl.simple.pre"><![CDATA[<em>]]></str>
          <str name="hl.simple.post"><![CDATA[</em>]]></str>
        </lst>
      </formatter>

      <!-- Configure the standard encoder -->
      <encoder name="html"
               class="solr.highlight.HtmlEncoder" />

      <!-- Configure the standard fragListBuilder -->
      <fragListBuilder name="simple"
                       class="solr.highlight.SimpleFragListBuilder"/>

      <!-- Configure the single fragListBuilder -->
      <fragListBuilder name="single"
                       class="solr.highlight.SingleFragListBuilder"/>

      <!-- Configure the weighted fragListBuilder -->
      <fragListBuilder name="weighted"
                       default="true"
                       class="solr.highlight.WeightedFragListBuilder"/>

      <!-- default tag FragmentsBuilder -->
      <fragmentsBuilder name="default"
                        default="true"
                        class="solr.highlight.ScoreOrderFragmentsBuilder">
        <!--
        <lst name="defaults">
          <str name="hl.multiValuedSeparatorChar">/</str>
        </lst>
        -->
      </fragmentsBuilder>

      <!-- multi-colored tag FragmentsBuilder -->
      <fragmentsBuilder name="colored"
                        class="solr.highlight.ScoreOrderFragmentsBuilder">
        <lst name="defaults">
          <str name="hl.tag.pre"><![CDATA[
               <b style="background:yellow">,<b style="background:lawgreen">,
               <b style="background:aquamarine">,<b style="background:magenta">,
               <b style="background:palegreen">,<b style="background:coral">,
               <b style="background:wheat">,<b style="background:khaki">,
               <b style="background:lime">,<b style="background:deepskyblue">]]></str>
          <str name="hl.tag.post"><![CDATA[</b>]]></str>
        </lst>
      </fragmentsBuilder>

      <boundaryScanner name="default"
                       default="true"
                       class="solr.highlight.SimpleBoundaryScanner">
        <lst name="defaults">
          <str name="hl.bs.maxScan">10</str>
          <str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
        </lst>
      </boundaryScanner>

      <boundaryScanner name="breakIterator"
                       class="solr.highlight.BreakIteratorBoundaryScanner">
        <lst name="defaults">
          <!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->
          <str name="hl.bs.type">WORD</str>
          <!-- language and country are used when constructing Locale object.  -->
          <!-- And the Locale object will be used when getting instance of BreakIterator -->
          <str name="hl.bs.language">en</str>
          <str name="hl.bs.country">US</str>
        </lst>
      </boundaryScanner>
    </highlighting>
  </searchComponent>
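
  <!-- A usage sketch of requesting highlighting with the component configured above (the field
       names are examples from the sample schema):
         /select?q=features:power&hl=true&hl.fl=features&hl.snippets=2
    -->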

  <!-- Update Processors

       Chains of Update Processor Factories for dealing with Update
       Requests can be declared, and then used by name in Update
       Request Processors

       http://wiki.apache.org/solr/UpdateRequestProcessor

    -->
  <!-- Deduplication

       An example dedup update processor that creates the "id" field
       on the fly based on the hash code of some other fields.  This
       example has overwriteDupes set to false since we are using the
       id field as the signatureField and Solr will maintain
       uniqueness based on that anyway.  

    -->
  <!--
     <updateRequestProcessorChain name="dedupe">
       <processor class="solr.processor.SignatureUpdateProcessorFactory">
         <bool name="enabled">true</bool>
         <str name="signatureField">id</str>
         <bool name="overwriteDupes">false</bool>
         <str name="fields">name,features,cat</str>
         <str name="signatureClass">solr.processor.Lookup3Signature</str>
       </processor>
       <processor class="solr.LogUpdateProcessorFactory" />
       <processor class="solr.RunUpdateProcessorFactory" />
     </updateRequestProcessorChain>
    -->

  <!-- Language identification

       This example update chain identifies the language of the incoming
       documents using the langid contrib. The detected language is
       written to field language_s. No field name mapping is done.
       The fields used for detection are text, title, subject and description,
       making this example suitable for detecting languages form full-text
       rich documents injected via ExtractingRequestHandler.
       See more about langId at http://wiki.apache.org/solr/LanguageDetection
    -->
    <!--
     <updateRequestProcessorChain name="langid">
       <processor class="org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory">
         <str name="langid.fl">text,title,subject,description</str>
         <str name="langid.langField">language_s</str>
         <str name="langid.fallback">en</str>
       </processor>
       <processor class="solr.LogUpdateProcessorFactory" />
       <processor class="solr.RunUpdateProcessorFactory" />
     </updateRequestProcessorChain>
    -->

  <!-- Script update processor

    This example hooks in an update processor implemented using JavaScript.

    See more about the script update processor at http://wiki.apache.org/solr/ScriptUpdateProcessor
  -->
  <!--
    <updateRequestProcessorChain name="script">
      <processor class="solr.StatelessScriptUpdateProcessorFactory">
        <str name="script">update-script.js</str>
        <lst name="params">
          <str name="config_param">example config parameter</str>
        </lst>
      </processor>
      <processor class="solr.RunUpdateProcessorFactory" />
    </updateRequestProcessorChain>
  -->

  <!-- Response Writers

       http://wiki.apache.org/solr/QueryResponseWriter

       Request responses will be written using the writer specified by
       the 'wt' request parameter matching the name of a registered
       writer.

       The "default" writer is the default and will be used if 'wt' is
       not specified in the request.
    -->
  <!-- The following response writers are implicitly configured unless
       overridden...
    -->
  <!--
     <queryResponseWriter name="xml"
                          default="true"
                          class="solr.XMLResponseWriter" />
     <queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
     <queryResponseWriter name="python" class="solr.PythonResponseWriter"/>
     <queryResponseWriter name="ruby" class="solr.RubyResponseWriter"/>
     <queryResponseWriter name="php" class="solr.PHPResponseWriter"/>
     <queryResponseWriter name="phps" class="solr.PHPSerializedResponseWriter"/>
     <queryResponseWriter name="csv" class="solr.CSVResponseWriter"/>
     <queryResponseWriter name="schema.xml" class="solr.SchemaXmlResponseWriter"/>
    -->

  <queryResponseWriter name="json" class="solr.JSONResponseWriter">
     <!-- For the purposes of the tutorial, JSON responses are written as
      plain text so that they are easy to read in *any* browser.
      If you expect a MIME type of "application/json" just remove this override.
     -->
    <str name="content-type">text/plain; charset=UTF-8</str>
  </queryResponseWriter>

  <!--
     Custom response writers can be declared as needed...
    -->
    <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy"/>


  <!-- XSLT response writer transforms the XML output by any xslt file found
       in Solr's conf/xslt directory.  Changes to xslt files are checked for
       every xsltCacheLifetimeSeconds.  
    -->
  <queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
    <int name="xsltCacheLifetimeSeconds">5</int>
  </queryResponseWriter>
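
  <!-- A usage sketch of the XSLT writer: the "tr" parameter names a stylesheet under conf/xslt
       (example.xsl ships with the Solr example configuration):
         /select?q=*:*&wt=xslt&tr=example.xsl
    -->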

  <!-- Query Parsers

       http://wiki.apache.org/solr/SolrQuerySyntax

       Multiple QParserPlugins can be registered by name, and then
       used in either the "defType" param for the QueryComponent (used
       by SearchHandler) or in LocalParams
    -->
  <!-- example of registering a query parser -->
  <!--
     <queryParser name="myparser" class="com.mycompany.MyQParserPlugin"/>
    -->

  <!-- Function Parsers

       http://wiki.apache.org/solr/FunctionQuery

       Multiple ValueSourceParsers can be registered by name, and then
       used as function names when using the "func" QParser.
    -->
  <!-- example of registering a custom function parser  -->
  <!--
     <valueSourceParser name="myfunc"
                        class="com.mycompany.MyValueSourceParser" />
    -->


  <!-- Document Transformers
       http://wiki.apache.org/solr/DocTransformers
    -->
  <!--
     Could be something like:
     <transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
       <int name="connection">jdbc://....</int>
     </transformer>

     To add a constant value to all docs, use:
     <transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
       <int name="value">5</int>
     </transformer>

     If you want the user to still be able to change it with _value:something_ use this:
     <transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
       <double name="defaultValue">5</double>
     </transformer>

      If you are using the QueryElevationComponent, you may wish to mark documents that get boosted.  The
      EditorialMarkerFactory will do exactly that:
     <transformer name="qecBooster" class="org.apache.solr.response.transform.EditorialMarkerFactory" />
    -->


  <!-- Legacy config for the admin interface -->
  <admin>
    <defaultQuery>*:*</defaultQuery>
  </admin>

</config>


