Java QueryScorer Class Code Examples


This article collects typical usage examples of the Java class org.apache.lucene.search.highlight.QueryScorer. If you are struggling with questions such as what QueryScorer is for, or how to use it, the curated class examples below should help.

The QueryScorer class belongs to the org.apache.lucene.search.highlight package. Twenty-five code examples are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Java code examples.
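All of the examples below share the same basic pipeline: build a Query, wrap it in a QueryScorer, choose a Formatter for the markup, combine them in a Highlighter, attach a Fragmenter, and feed the document text through a TokenStream. A minimal sketch of that pattern follows (the field name "content", the sample text, and the `<b>` tags are illustrative assumptions, not taken from any one example below):

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.highlight.Fragmenter;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.InvalidTokenOffsetsException;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
import org.apache.lucene.search.highlight.SimpleSpanFragmenter;

public class QueryScorerSketch {
    public static void main(String[] args) throws IOException, InvalidTokenOffsetsException {
        String text = "Lucene highlighting scores fragments with QueryScorer";
        // the query term is lowercase because StandardAnalyzer lowercases tokens
        Query query = new TermQuery(new Term("content", "queryscorer"));
        QueryScorer scorer = new QueryScorer(query, "content");      // scores fragments against the query
        SimpleHTMLFormatter formatter = new SimpleHTMLFormatter("<b>", "</b>"); // markup around each hit
        Highlighter highlighter = new Highlighter(formatter, scorer);
        Fragmenter fragmenter = new SimpleSpanFragmenter(scorer, 100); // ~100-char fragments
        highlighter.setTextFragmenter(fragmenter);
        TokenStream ts = new StandardAnalyzer().tokenStream("content", new StringReader(text));
        System.out.println(highlighter.getBestFragment(ts, text));
    }
}
```

Most of the variation between the examples is in which Analyzer and Formatter they pick and how they fetch the stored field text.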

Example 1: getHighlightString

Votes: 3

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

public static String getHighlightString(String text, String keyword) throws IOException, InvalidTokenOffsetsException {
    TermQuery query = new TermQuery(new Term("f", keyword));
    QueryScorer scorer = new QueryScorer(query);
    // highlight tags assumed; the source page stripped HTML from the string literals
    SimpleHTMLFormatter formatter = new SimpleHTMLFormatter("<span class=\"highlight\">", "</span>");
    Highlighter highlighter = new Highlighter(formatter, scorer);
    Fragmenter fragmenter = new SimpleFragmenter(50);
    highlighter.setTextFragmenter(fragmenter);
    TokenStream tokenStream = new StandardAnalyzer(Version.LUCENE_20).tokenStream("f", new StringReader(text));
    String result = highlighter.getBestFragments(tokenStream, text, 30, "...");
    // wrap the fragments in a minimal HTML page (markup assumed; stripped in the original)
    StringBuilder writer = new StringBuilder();
    writer.append("<html><head><style>\n");
    writer.append(".highlight {\n" +
            "  background: yellow;\n" +
            "}\n");
    writer.append("</style></head><body>\n");
    writer.append(result);
    writer.append("</body></html>");
    return writer.toString();
}

Author: CoEIA, Project: DEM, Lines: 24

Example 2: searToHighlighterCss

Votes: 3

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

/**
 * Searches and prints hits with highlighted fragments.
 * @param analyzer
 * @param searcher
 * @throws IOException
 * @throws InvalidTokenOffsetsException
 */
public void searToHighlighterCss(Analyzer analyzer, IndexSearcher searcher) throws IOException, InvalidTokenOffsetsException {
    // query keyword; the GBK byte round-trip converts the literal's encoding
    Term term = new Term("Content", new String("免費".getBytes(), "GBK"));
    TermQuery query = new TermQuery(term);
    TopDocs docs = searcher.search(query, 10); // search
    /** Custom tags to mark up the highlighted text (tags assumed; stripped in the original) */
    SimpleHTMLFormatter formatter = new SimpleHTMLFormatter("<b>", "</b>");
    /** Create the QueryScorer */
    QueryScorer scorer = new QueryScorer(query);
    /** Create the Fragmenter */
    Fragmenter fragmenter = new SimpleSpanFragmenter(scorer);
    Highlighter highlight = new Highlighter(formatter, scorer);
    highlight.setTextFragmenter(fragmenter);
    for (ScoreDoc doc : docs.scoreDocs) { // iterate the matched documents and print the content
        Document document = searcher.doc(doc.doc);
        String value = document.get("Content"); // get() returns the field value itself; getField().toString() would include the field wrapper
        TokenStream tokenStream = analyzer.tokenStream("Content", new StringReader(value));
        String str1 = highlight.getBestFragment(tokenStream, value);
        System.out.println(str1);
    }
}

Author: Smalinuxer, Project: Rearchor, Lines: 30

Example 3: getHighlighterList

Votes: 3

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

private List<LumongoHighlighter> getHighlighterList(List<HighlightRequest> highlightRequests, Query q) {
    if (highlightRequests.isEmpty()) {
        return Collections.emptyList();
    }
    List<LumongoHighlighter> highlighterList = new ArrayList<>();
    for (HighlightRequest highlight : highlightRequests) {
        QueryScorer queryScorer = new QueryScorer(q, highlight.getField());
        queryScorer.setExpandMultiTermQuery(true);
        Fragmenter fragmenter = new SimpleSpanFragmenter(queryScorer, highlight.getFragmentLength());
        SimpleHTMLFormatter simpleHTMLFormatter = new SimpleHTMLFormatter(highlight.getPreTag(), highlight.getPostTag());
        LumongoHighlighter highlighter = new LumongoHighlighter(simpleHTMLFormatter, queryScorer, highlight);
        highlighter.setTextFragmenter(fragmenter);
        highlighterList.add(highlighter);
    }
    return highlighterList;
}

Author: lumongo, Project: lumongo, Lines: 20

Example 4: testHits

Votes: 3

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

public void testHits() throws Exception {
    IndexSearcher searcher = new IndexSearcher(TestUtil.getBookIndexDirectory());
    TermQuery query = new TermQuery(new Term("title", "action"));
    TopDocs hits = searcher.search(query, 10);
    QueryScorer scorer = new QueryScorer(query, "title");
    Highlighter highlighter = new Highlighter(scorer);
    highlighter.setTextFragmenter(new SimpleSpanFragmenter(scorer));
    Analyzer analyzer = new SimpleAnalyzer();
    for (ScoreDoc sd : hits.scoreDocs) {
        StoredDocument doc = searcher.doc(sd.doc);
        String title = doc.get("title");
        TokenStream stream = TokenSources.getAnyTokenStream(searcher.getIndexReader(), sd.doc, "title", doc,
                analyzer);
        String fragment = highlighter.getBestFragment(stream, title);
        LOGGER.info(fragment);
    }
}

Author: xuzhikethinker, Project: t4f-data, Lines: 23

Example 5: testHighlightPhrase

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

public void testHighlightPhrase() throws Exception {
    Query query = new PhraseQuery.Builder()
            .add(new Term("field", "foo"))
            .add(new Term("field", "bar"))
            .build();
    QueryScorer queryScorer = new CustomQueryScorer(query);
    org.apache.lucene.search.highlight.Highlighter highlighter = new org.apache.lucene.search.highlight.Highlighter(queryScorer);
    String[] frags = highlighter.getBestFragments(new MockAnalyzer(random()), "field", "bar foo bar foo", 10);
    // <B> tags assumed (the default SimpleHTMLFormatter wraps matches in <B>...</B>;
    // the source page stripped the HTML): only the actual phrase occurrence is highlighted
    assertArrayEquals(new String[] {"bar <B>foo</B> <B>bar</B> foo"}, frags);
}

Author: justor, Project: elasticsearch_my, Lines: 11

Example 6: displayHtmlHighlight

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

static String displayHtmlHighlight(Query query, Analyzer analyzer, String fieldName, String fieldContent,
        int fragmentSize) throws IOException, InvalidTokenOffsetsException {
    // highlight tags assumed; the source page stripped HTML from the string literals
    Highlighter highlighter = new Highlighter(new SimpleHTMLFormatter("<b>", "</b>"),
            new QueryScorer(query));
    Fragmenter fragmenter = new SimpleFragmenter(fragmentSize);
    highlighter.setTextFragmenter(fragmenter);
    return highlighter.getBestFragment(analyzer, fieldName, fieldContent);
}

Author: tedyli, Project: Tedyli-Searcher, Lines: 9

Example 7: search

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

public static void search(String indexDir, String q) throws Exception {
    Directory dir = FSDirectory.open(Paths.get(indexDir));
    IndexReader reader = DirectoryReader.open(dir);
    IndexSearcher is = new IndexSearcher(reader);
    // Analyzer analyzer = new StandardAnalyzer(); // standard analyzer
    SmartChineseAnalyzer analyzer = new SmartChineseAnalyzer();
    QueryParser parser = new QueryParser("desc", analyzer);
    Query query = parser.parse(q);
    long start = System.currentTimeMillis();
    TopDocs hits = is.search(query, 10);
    long end = System.currentTimeMillis();
    System.out.println("Matched " + q + " in " + (end - start) + " ms, found " + hits.totalHits + " records");
    QueryScorer scorer = new QueryScorer(query);
    Fragmenter fragmenter = new SimpleSpanFragmenter(scorer);
    // highlight tags assumed; the source page stripped HTML from the string literals
    SimpleHTMLFormatter simpleHTMLFormatter = new SimpleHTMLFormatter("<b>", "</b>");
    Highlighter highlighter = new Highlighter(simpleHTMLFormatter, scorer);
    highlighter.setTextFragmenter(fragmenter);
    for (ScoreDoc scoreDoc : hits.scoreDocs) {
        Document doc = is.doc(scoreDoc.doc);
        System.out.println(doc.get("city"));
        System.out.println(doc.get("desc"));
        String desc = doc.get("desc");
        if (desc != null) {
            TokenStream tokenStream = analyzer.tokenStream("desc", new StringReader(desc));
            System.out.println(highlighter.getBestFragment(tokenStream, desc));
        }
    }
    reader.close();
}

Author: MiniPa, Project: cjs_ssms, Lines: 32

Example 8: search

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

@Override
@SuppressWarnings("unchecked")
public List<Post> search(Paging paging, String q) throws Exception {
    FullTextSession fullTextSession = Search.getFullTextSession(super.session());
    SearchFactory sf = fullTextSession.getSearchFactory();
    QueryBuilder qb = sf.buildQueryBuilder().forEntity(PostPO.class).get();
    org.apache.lucene.search.Query luceneQuery = qb.keyword().onFields("title", "summary", "tags").matching(q).createQuery();
    FullTextQuery query = fullTextSession.createFullTextQuery(luceneQuery);
    query.setFirstResult(paging.getFirstResult());
    query.setMaxResults(paging.getMaxResults());
    StandardAnalyzer standardAnalyzer = new StandardAnalyzer();
    // highlight tags assumed; the source page stripped HTML from the string literals
    SimpleHTMLFormatter formatter = new SimpleHTMLFormatter("<b>", "</b>");
    QueryScorer queryScorer = new QueryScorer(luceneQuery);
    Highlighter highlighter = new Highlighter(formatter, queryScorer);
    List<PostPO> list = query.list();
    List<Post> rets = new ArrayList<>(list.size());
    for (PostPO po : list) {
        Post m = BeanMapUtils.copy(po, 0);
        // apply highlighting
        String title = highlighter.getBestFragment(standardAnalyzer, "title", m.getTitle());
        String summary = highlighter.getBestFragment(standardAnalyzer, "summary", m.getSummary());
        if (StringUtils.isNotEmpty(title)) {
            m.setTitle(title);
        }
        if (StringUtils.isNotEmpty(summary)) {
            m.setSummary(summary);
        }
        rets.add(m);
    }
    paging.setTotalCount(query.getResultSize());
    return rets;
}

Author: ThomasYangZi, Project: mblog, Lines: 41

Example 9: HighlightingHelper

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

HighlightingHelper(Query query, Analyzer analyzer) {
    this.analyzer = analyzer;
    Formatter formatter = new SimpleHTMLFormatter();
    Encoder encoder = new MinimalHTMLEncoder();
    scorer = new QueryScorer(query);
    highlighter = new Highlighter(formatter, encoder, scorer);
    fragmentLength = DEFAULT_FRAGMENT_LENGTH;
    Fragmenter fragmenter = new SimpleSpanFragmenter(scorer, fragmentLength);
    highlighter.setTextFragmenter(fragmenter);
}

Author: lukhnos, Project: lucenestudy, Lines: 13

Example 10: getBenchmarkHighlighter

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

@Override
protected BenchmarkHighlighter getBenchmarkHighlighter(Query q) {
    highlighter = new Highlighter(new SimpleHTMLFormatter(), new QueryScorer(q));
    highlighter.setMaxDocCharsToAnalyze(maxDocCharsToAnalyze);
    return new BenchmarkHighlighter() {
        @Override
        public int doHighlight(IndexReader reader, int doc, String field,
                Document document, Analyzer analyzer, String text) throws Exception {
            TokenStream ts = TokenSources.getAnyTokenStream(reader, doc, field, document, analyzer);
            TextFragment[] frag = highlighter.getBestTextFragments(ts, text, mergeContiguous, maxFrags);
            return frag != null ? frag.length : 0;
        }
    };
}

Author: europeana, Project: search, Lines: 15

Example 11: getBenchmarkHighlighter

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

@Override
public BenchmarkHighlighter getBenchmarkHighlighter(Query q) {
    highlighter = new Highlighter(new SimpleHTMLFormatter(), new QueryScorer(q));
    return new BenchmarkHighlighter() {
        @Override
        public int doHighlight(IndexReader reader, int doc, String field, Document document, Analyzer analyzer, String text) throws Exception {
            TokenStream ts = TokenSources.getAnyTokenStream(reader, doc, field, document, analyzer);
            TextFragment[] frag = highlighter.getBestTextFragments(ts, text, mergeContiguous, maxFrags);
            numHighlightedResults += frag != null ? frag.length : 0;
            return frag != null ? frag.length : 0;
        }
    };
}

Author: europeana, Project: search, Lines: 14

Example 12: createHighlighter

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

public static Object createHighlighter(Query query, String highlightBegin, String highlightEnd) {
    return new Highlighter(
            //new SimpleHTMLFormatter("",""),
            new SimpleHTMLFormatter(highlightBegin, highlightEnd),
            new QueryScorer(query));
}

Author: lucee, Project: Lucee4, Lines: 9

Example 13: searchCorpus

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

/**
 * Searches the current corpus using the search terms in the search field.
 */
private void searchCorpus() {
    if (search.getText().trim().equals("")) return;
    try {
        indexSearcher = guess.getSelected() != null ?
                getIndex(getDiffCorpus(gold.getSelected(), guess.getSelected())) :
                getIndex(gold.getSelected());
        //System.out.println("Searching...");
        QueryParser parser = new QueryParser("Word", analyzer);
        Query query = parser.parse(search.getText());
        Hits hits = indexSearcher.search(query);
        Highlighter highlighter = new Highlighter(new QueryScorer(query));
        DefaultListModel model = new DefaultListModel();
        for (int i = 0; i < hits.length(); i++) {
            Document hitDoc = hits.doc(i);
            int nr = Integer.parseInt(hitDoc.get("")); // field name lost in the source page
            String best = null;
            for (Object field : hitDoc.getFields()) {
                Field f = (Field) field;
                best = highlighter.getBestFragment(analyzer, f.name(), hitDoc.get(f.name()));
                if (best != null) break;
            }
            if (best != null)
                model.addElement(new Result(nr, "" + nr + ":" + best + ""));
            //System.out.println(highlighter.getBestFragment(analyzer, "Word", hitDoc.get("Word")));
        }
        results.setModel(model);
        repaint();
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}

Author: riedelcastro, Project: whatswrong, Lines: 37

Example 14: createHighlighter

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

protected Highlighter createHighlighter(org.apache.lucene.search.Query luceneQuery) {
    // highlight tags assumed; the source page stripped HTML from the string literals
    SimpleHTMLFormatter format = new SimpleHTMLFormatter("<b>", "</b>");
    Highlighter highlighter = new Highlighter(format, new QueryScorer(luceneQuery)); // highlighting
    // highlighter.setTextFragmenter(new SimpleFragmenter(Integer.MAX_VALUE));
    highlighter.setTextFragmenter(new SimpleFragmenter(200));
    return highlighter;
}

Author: mixaceh, Project: openyu-commons, Lines: 9

Example 15: doHighlight

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

/**
 * Highlight (bold, color) query words in the result document. Sets the HighlightResult
 * for the content or the description.
 *
 * @param query
 * @param analyzer
 * @param doc
 * @param resultDocument
 * @throws IOException
 */
private void doHighlight(final Query query, final Analyzer analyzer, final Document doc, final ResultDocument resultDocument) throws IOException {
    final Highlighter highlighter = new Highlighter(new SimpleHTMLFormatter(HIGHLIGHT_PRE_TAG, HIGHLIGHT_POST_TAG), new QueryScorer(query));
    // Get the 3 best fragments of the content and separate them with "..."
    try {
        // highlight content
        final String content = doc.get(AbstractOlatDocument.CONTENT_FIELD_NAME);
        TokenStream tokenStream = analyzer.tokenStream(AbstractOlatDocument.CONTENT_FIELD_NAME, new StringReader(content));
        String highlightResult = highlighter.getBestFragments(tokenStream, content, 3, HIGHLIGHT_SEPARATOR);
        // if there is no highlight result in the content => look in the description
        if (highlightResult.length() == 0) {
            final String description = doc.get(AbstractOlatDocument.DESCRIPTION_FIELD_NAME);
            tokenStream = analyzer.tokenStream(AbstractOlatDocument.DESCRIPTION_FIELD_NAME, new StringReader(description));
            highlightResult = highlighter.getBestFragments(tokenStream, description, 3, HIGHLIGHT_SEPARATOR);
            resultDocument.setHighlightingDescription(true);
        }
        resultDocument.setHighlightResult(highlightResult);
        // highlight title
        final String title = doc.get(AbstractOlatDocument.TITLE_FIELD_NAME);
        tokenStream = analyzer.tokenStream(AbstractOlatDocument.TITLE_FIELD_NAME, new StringReader(title));
        final String highlightTitle = highlighter.getBestFragments(tokenStream, title, 3, " ");
        resultDocument.setHighlightTitle(highlightTitle);
    } catch (final InvalidTokenOffsetsException e) {
        log.warn("", e);
    }
}

Author: huihoo, Project: olat, Lines: 37

Example 16: getResult

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

public String getResult(String fieldName, String fieldValue) throws Exception {
    BuguIndex index = BuguIndex.getInstance();
    QueryParser parser = new QueryParser(index.getVersion(), fieldName, index.getAnalyzer());
    Query query = parser.parse(keywords);
    TokenStream tokens = index.getAnalyzer().tokenStream(fieldName, new StringReader(fieldValue));
    QueryScorer scorer = new QueryScorer(query, fieldName);
    Highlighter highlighter = new Highlighter(formatter, scorer);
    highlighter.setTextFragmenter(new SimpleSpanFragmenter(scorer));
    return highlighter.getBestFragments(tokens, fieldValue, maxFragments, "...");
}

Author: xbwen, Project: bugu-mongo, Lines: 11

Example 17: testHighlighting

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

public void testHighlighting() throws Exception {
    String text = "The quick brown fox jumps over the lazy dog";
    TermQuery query = new TermQuery(new Term("field", "fox"));
    TokenStream tokenStream = new SimpleAnalyzer().tokenStream("field", new StringReader(text));
    QueryScorer scorer = new QueryScorer(query, "field");
    Fragmenter fragmenter = new SimpleSpanFragmenter(scorer);
    Highlighter highlighter = new Highlighter(scorer);
    highlighter.setTextFragmenter(fragmenter);
    // <B> tags assumed: the default SimpleHTMLFormatter wraps matches in <B>...</B>,
    // but the source page stripped the HTML from the expected string
    assertEquals("The quick brown <B>fox</B> jumps over the lazy dog",
            highlighter.getBestFragment(tokenStream, text));
}

Author: xuzhikethinker, Project: t4f-data, Lines: 15

Example 18: searchData

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

private String searchData(String key) throws IOException, ParseException, InvalidTokenOffsetsException {
    Directory directory = FSDirectory.open(new File(filePath));
    IndexSearcher indexSearcher = new IndexSearcher(directory);
    QueryParser queryParser = new QueryParser(Version.LUCENE_31, "foods",
            new SmartChineseAnalyzer(Version.LUCENE_31, true));
    //queryParser.setDefaultOperator(Operator.AND);
    Query query = queryParser.parse(key);
    TopDocs docs = indexSearcher.search(query, 10);
    QueryScorer queryScorer = new QueryScorer(query, "foods");
    Highlighter highlighter = new Highlighter(queryScorer);
    highlighter.setTextFragmenter(new SimpleSpanFragmenter(queryScorer));
    List<SearchResult> searchResults = new ArrayList<SearchResult>();
    if (docs != null) {
        for (ScoreDoc scoreDoc : docs.scoreDocs) {
            Document doc = indexSearcher.doc(scoreDoc.doc);
            TokenStream tokenStream = TokenSources.getAnyTokenStream(
                    indexSearcher.getIndexReader(), scoreDoc.doc, "foods", doc,
                    new SmartChineseAnalyzer(Version.LUCENE_31, true));
            SearchResult searchResult = new SearchResult();
            searchResult.setRestaurantId(Long.valueOf(doc.get("id")));
            searchResult.setRestaurantName(doc.get("restaurant_name"));
            searchResult.setKey(key);
            searchResult.setFoods(Arrays.asList(highlighter.
                    getBestFragment(tokenStream, doc.get("foods")).split(" ")));
            searchResults.add(searchResult);
        }
    } else {
        searchResults = null;
    }
    indexSearcher.close();
    directory.close();
    return new Gson().toJson(searchResults);
}

Author: tensorchen, Project: rrs, Lines: 42

Example 19: highlight

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

/**
 * NOTE: This method will not preserve the correct field types.
 *
 * @param preTag
 * @param postTag
 */
public static Document highlight(int docId, Document document, Query query, FieldManager fieldManager,
        IndexReader reader, String preTag, String postTag) throws IOException, InvalidTokenOffsetsException {
    String fieldLessFieldName = fieldManager.getFieldLessFieldName();
    Query fixedQuery = fixSuperQuery(query, null, fieldLessFieldName);
    Analyzer analyzer = fieldManager.getAnalyzerForQuery();
    SimpleHTMLFormatter htmlFormatter = new SimpleHTMLFormatter(preTag, postTag);
    Document result = new Document();
    for (IndexableField f : document) {
        String name = f.name();
        if (fieldLessFieldName.equals(name) || FIELDS_NOT_TO_HIGHLIGHT.contains(name)) {
            result.add(f);
            continue;
        }
        String text = f.stringValue();
        Number numericValue = f.numericValue();
        Query fieldFixedQuery;
        if (fieldManager.isFieldLessIndexed(name)) {
            fieldFixedQuery = fixSuperQuery(query, name, fieldLessFieldName);
        } else {
            fieldFixedQuery = fixedQuery;
        }
        if (numericValue != null) {
            if (shouldNumberBeHighlighted(name, numericValue, fieldFixedQuery)) {
                String numberHighlight = preTag + text + postTag;
                result.add(new StringField(name, numberHighlight, Store.YES));
            }
        } else {
            Highlighter highlighter = new Highlighter(htmlFormatter, new QueryScorer(fieldFixedQuery, name));
            TokenStream tokenStream = TokenSources.getAnyTokenStream(reader, docId, name, analyzer);
            TextFragment[] frag = highlighter.getBestTextFragments(tokenStream, text, false, 10);
            for (int j = 0; j < frag.length; j++) {
                if ((frag[j] != null) && (frag[j].getScore() > 0)) {
                    result.add(new StringField(name, frag[j].toString(), Store.YES));
                }
            }
        }
    }
    return result;
}

Author: apache, Project: incubator-blur, Lines: 52

Example 20: main

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

public static void main(String[] args) throws Exception {
    ApplicationContext applicationContext = new ClassPathXmlApplicationContext("applicationContext.xml");
    SessionFactory sessionFactory = applicationContext.getBean("hibernate4sessionFactory", SessionFactory.class);
    FullTextSession fullTextSession = Search.getFullTextSession(sessionFactory.openSession());
    // query via the Hibernate Search API, matching several fields: name, description, authors.name
    //QueryBuilder qb = fullTextEntityManager.getSearchFactory().buildQueryBuilder().forEntity(Book.class).get();
    //Query luceneQuery = qb.keyword().onFields("name","description","authors.name").matching("移動互聯網").createQuery();
    // query via the Lucene API, matching several fields: name, description, authors.name
    // use the Paoding analyzer
    MultiFieldQueryParser queryParser = new MultiFieldQueryParser(Version.LUCENE_36, new String[]{"name", "description", "authors.name"}, new PaodingAnalyzer());
    Query luceneQuery = queryParser.parse("實戰");
    FullTextQuery fullTextQuery = fullTextSession.createFullTextQuery(luceneQuery, Book.class);
    // page size
    fullTextQuery.setMaxResults(5);
    // current page
    fullTextQuery.setFirstResult(0);
    // highlighting setup (tags assumed; the source page stripped HTML from the string literals)
    SimpleHTMLFormatter formatter = new SimpleHTMLFormatter("<b>", "</b>");
    QueryScorer queryScorer = new QueryScorer(luceneQuery);
    Highlighter highlighter = new Highlighter(formatter, queryScorer);
    @SuppressWarnings("unchecked")
    List<Book> resultList = fullTextQuery.list();
    System.out.println("Found [" + resultList.size() + "] records");
    for (Book book : resultList) {
        String highlighterString = null;
        Analyzer analyzer = new PaodingAnalyzer();
        try {
            // highlight name
            highlighterString = highlighter.getBestFragment(analyzer, "name", book.getName());
            if (highlighterString != null) {
                book.setName(highlighterString);
            }
            // highlight authors.name
            Set<Author> authors = book.getAuthors();
            for (Author author : authors) {
                highlighterString = highlighter.getBestFragment(analyzer, "authors.name", author.getName());
                if (highlighterString != null) {
                    author.setName(highlighterString);
                }
            }
            // highlight description
            highlighterString = highlighter.getBestFragment(analyzer, "description", book.getDescription());
            if (highlighterString != null) {
                book.setDescription(highlighterString);
            }
        } catch (Exception e) {
        }
        System.out.println("Title: " + book.getName() + "\nDescription: " + book.getDescription() + "\nPublication date: " + book.getPublicationDate());
        System.out.println("----------------------------------------------------------");
    }
    fullTextSession.close();
    sessionFactory.close();
}

Author: v5developer, Project: maven-framework-project, Lines: 62

Example 21: query

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

@Override
public QueryResult query(String keyword, int start, int pagesize, Analyzer analyzer, String... field) throws Exception {
    QueryResult queryResult = new QueryResult();
    List<Book> books = new ArrayList<Book>();
    FullTextSession fullTextSession = Search.getFullTextSession(getSession());
    // query via the Hibernate Search API, matching several fields: name, description, authors.name
    //QueryBuilder qb = fullTextSession.getSearchFactory().buildQueryBuilder().forEntity(Book.class).get();
    //Query luceneQuery = qb.keyword().onFields(field).matching(keyword).createQuery();
    // query via the Lucene API, matching several fields: name, description, authors.name
    MultiFieldQueryParser queryParser = new MultiFieldQueryParser(Version.LUCENE_36, new String[]{"name", "description", "authors.name"}, analyzer);
    Query luceneQuery = queryParser.parse(keyword);
    FullTextQuery fullTextQuery = fullTextSession.createFullTextQuery(luceneQuery);
    int searchresultsize = fullTextQuery.getResultSize();
    queryResult.setSearchresultsize(searchresultsize);
    System.out.println("Found [" + searchresultsize + "] records");
    fullTextQuery.setFirstResult(start);
    fullTextQuery.setMaxResults(pagesize);
    // sort by id
    fullTextQuery.setSort(new Sort(new SortField("id", SortField.INT, true)));
    // highlighting setup (tags assumed; the source page stripped HTML from the string literals)
    SimpleHTMLFormatter formatter = new SimpleHTMLFormatter("<b>", "</b>");
    QueryScorer queryScorer = new QueryScorer(luceneQuery);
    Highlighter highlighter = new Highlighter(formatter, queryScorer);
    @SuppressWarnings("unchecked")
    List<Book> tempresult = fullTextQuery.list();
    for (Book book : tempresult) {
        String highlighterString = null;
        try {
            // highlight name
            highlighterString = highlighter.getBestFragment(analyzer, "name", book.getName());
            if (highlighterString != null) {
                book.setName(highlighterString);
            }
            // highlight authors.name
            Set<Author> authors = book.getAuthors();
            for (Author author : authors) {
                highlighterString = highlighter.getBestFragment(analyzer, "authors.name", author.getName());
                if (highlighterString != null) {
                    author.setName(highlighterString);
                }
            }
            // highlight description
            highlighterString = highlighter.getBestFragment(analyzer, "description", book.getDescription());
            if (highlighterString != null) {
                book.setDescription(highlighterString);
            }
        } catch (Exception e) {
        }
        books.add(book);
        System.out.println("Title: " + book.getName() + "\nDescription: " + book.getDescription() + "\nPublication date: " + book.getPublicationDate());
        System.out.println("----------------------------------------------------------");
    }
    queryResult.setSearchresult(books);
    return queryResult;
}

Author: v5developer, Project: maven-framework-project, Lines: 72

Example 22: main

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

public static void main(String[] args) throws Exception {
    ApplicationContext applicationContext = new ClassPathXmlApplicationContext("applicationContext.xml");
    EntityManagerFactory entityManagerFactory = applicationContext.getBean("entityManagerFactory", EntityManagerFactory.class);
    FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManagerFactory.createEntityManager());
    // query via the Hibernate Search API, matching several fields: name, description, authors.name
    //QueryBuilder qb = fullTextEntityManager.getSearchFactory().buildQueryBuilder().forEntity(Book.class).get();
    //Query luceneQuery = qb.keyword().onFields("name","description","authors.name").matching("移動互聯網").createQuery();
    // query via the Lucene API, matching several fields: name, description, authors.name
    // use the Paoding analyzer
    MultiFieldQueryParser queryParser = new MultiFieldQueryParser(Version.LUCENE_36, new String[]{"name", "description", "authors.name"}, new PaodingAnalyzer());
    Query luceneQuery = queryParser.parse("實戰");
    FullTextQuery fullTextQuery = fullTextEntityManager.createFullTextQuery(luceneQuery, Book.class);
    // page size
    fullTextQuery.setMaxResults(5);
    // current page
    fullTextQuery.setFirstResult(0);
    // highlighting setup (tags assumed; the source page stripped HTML from the string literals)
    SimpleHTMLFormatter formatter = new SimpleHTMLFormatter("<b>", "</b>");
    QueryScorer queryScorer = new QueryScorer(luceneQuery);
    Highlighter highlighter = new Highlighter(formatter, queryScorer);
    @SuppressWarnings("unchecked")
    List<Book> resultList = fullTextQuery.getResultList();
    for (Book book : resultList) {
        String highlighterString = null;
        Analyzer analyzer = new PaodingAnalyzer();
        try {
            // highlight name
            highlighterString = highlighter.getBestFragment(analyzer, "name", book.getName());
            if (highlighterString != null) {
                book.setName(highlighterString);
            }
            // highlight authors.name
            Set<Author> authors = book.getAuthors();
            for (Author author : authors) {
                highlighterString = highlighter.getBestFragment(analyzer, "authors.name", author.getName());
                if (highlighterString != null) {
                    author.setName(highlighterString);
                }
            }
            // highlight description
            highlighterString = highlighter.getBestFragment(analyzer, "description", book.getDescription());
            if (highlighterString != null) {
                book.setDescription(highlighterString);
            }
        } catch (Exception e) {
        }
    }
    fullTextEntityManager.close();
    entityManagerFactory.close();
}

Author: v5developer, Project: maven-framework-project, Lines: 60

Example 23: query

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

@Override
public QueryResult query(String keyword, int start, int pagesize, Analyzer analyzer, String... field) throws Exception {
    QueryResult queryResult = new QueryResult();
    List<Book> books = new ArrayList<Book>();
    FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(em);
    // query via the Hibernate Search API, matching several fields: name, description, authors.name
    //QueryBuilder qb = fullTextSession.getSearchFactory().buildQueryBuilder().forEntity(Book.class).get();
    //Query luceneQuery = qb.keyword().onFields(field).matching(keyword).createQuery();
    // query via the Lucene API, matching several fields: name, description, authors.name
    MultiFieldQueryParser queryParser = new MultiFieldQueryParser(Version.LUCENE_36, new String[]{"name", "description", "authors.name"}, analyzer);
    Query luceneQuery = queryParser.parse(keyword);
    FullTextQuery fullTextQuery = fullTextEntityManager.createFullTextQuery(luceneQuery);
    int searchresultsize = fullTextQuery.getResultSize();
    queryResult.setSearchresultsize(searchresultsize);
    fullTextQuery.setFirstResult(start);
    fullTextQuery.setMaxResults(pagesize);
    // sort by id
    fullTextQuery.setSort(new Sort(new SortField("id", SortField.INT, true)));
    // highlighting setup (tags assumed; the source page stripped HTML from the string literals)
    SimpleHTMLFormatter formatter = new SimpleHTMLFormatter("<b>", "</b>");
    QueryScorer queryScorer = new QueryScorer(luceneQuery);
    Highlighter highlighter = new Highlighter(formatter, queryScorer);
    @SuppressWarnings("unchecked")
    List<Book> tempresult = fullTextQuery.getResultList();
    for (Book book : tempresult) {
        String highlighterString = null;
        try {
            // highlight name
            highlighterString = highlighter.getBestFragment(analyzer, "name", book.getName());
            if (highlighterString != null) {
                book.setName(highlighterString);
            }
            // highlight authors.name
            Set<Author> authors = book.getAuthors();
            for (Author author : authors) {
                highlighterString = highlighter.getBestFragment(analyzer, "authors.name", author.getName());
                if (highlighterString != null) {
                    author.setName(highlighterString);
                }
            }
            // highlight description
            highlighterString = highlighter.getBestFragment(analyzer, "description", book.getDescription());
            if (highlighterString != null) {
                book.setDescription(highlighterString);
            }
        } catch (Exception e) {
        }
        books.add(book);
    }
    queryResult.setSearchresult(books);
    return queryResult;
}

Author: v5developer, Project: maven-framework-project, Lines: 68

Example 24: main

Votes: 2

import org.apache.lucene.search.highlight.QueryScorer; // import the required package/class

public static void main(String[] args) throws Exception {
    if (args.length != 0) {
        QUERY = args[0];
    }
    // wrap Paoding as an Analyzer that satisfies Lucene's contract
    Analyzer analyzer = new PaodingAnalyzer();
    // read the text.txt file in this class's directory
    String content = ContentReader.readText(English.class);
    // standard Lucene indexing and search code follows
    Directory ramDir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(ramDir, analyzer);
    Document doc = new Document();
    Field fd = new Field(FIELD_NAME, content, Field.Store.YES,
            Field.Index.TOKENIZED, Field.TermVector.WITH_POSITIONS_OFFSETS);
    doc.add(fd);
    writer.addDocument(doc);
    writer.optimize();
    writer.close();
    IndexReader reader = IndexReader.open(ramDir);
    String queryString = QUERY;
    QueryParser parser = new QueryParser(FIELD_NAME, analyzer);
    Query query = parser.parse(queryString);
    Searcher searcher = new IndexSearcher(ramDir);
    query = query.rewrite(reader);
    System.out.println("Searching for: " + query.toString(FIELD_NAME));
    Hits hits = searcher.search(query);
    BoldFormatter formatter = new BoldFormatter();
    Highlighter highlighter = new Highlighter(formatter, new QueryScorer(
            query));
    highlighter.setTextFragmenter(new SimpleFragmenter(50));
    for (int i = 0; i < hits.length(); i++) {
        String text = hits.doc(i).get(FIELD_NAME);
        int maxNumFragmentsRequired = 5;
        String fragmentSeparator = "...";
        TermPositionVector tpv = (TermPositionVector) reader
                .getTermFreqVector(hits.id(i), FIELD_NAME);
        TokenStream tokenStream = TokenSources.getTokenStream(tpv);
        String result = highlighter.getBestFragments(tokenStream, text,
                maxNumFragmentsRequired, fragmentSeparator);
        System.out.println("\n" + result);
    }
    reader.close();
}

Author: no8899, Project: paoding-for-lucene-2.4, Lines: 48

示例25: main

​點讚 2

import org.apache.lucene.search.highlight.QueryScorer; //導入依賴的package包/類

public static void main(String[] args) throws Exception {
    if (args.length != 0) {
        QUERY = args[0];
    }
    // Wrap Paoding as an Analyzer conforming to Lucene's interface
    Analyzer analyzer = new PaodingAnalyzer();
    // Read the text.txt file in this class's directory
    String content = ContentReader.readText(Chinese.class);
    // What follows is standard Lucene indexing and search code
    Directory ramDir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(ramDir, analyzer);
    Document doc = new Document();
    Field fd = new Field(FIELD_NAME, content, Field.Store.YES,
            Field.Index.TOKENIZED, Field.TermVector.WITH_POSITIONS_OFFSETS);
    doc.add(fd);
    writer.addDocument(doc);
    writer.optimize();
    writer.close();
    IndexReader reader = IndexReader.open(ramDir);
    String queryString = QUERY;
    QueryParser parser = new QueryParser(FIELD_NAME, analyzer);
    Query query = parser.parse(queryString);
    Searcher searcher = new IndexSearcher(ramDir);
    query = query.rewrite(reader);
    System.out.println("Searching for: " + query.toString(FIELD_NAME));
    Hits hits = searcher.search(query);
    BoldFormatter formatter = new BoldFormatter();
    Highlighter highlighter = new Highlighter(formatter, new QueryScorer(query));
    highlighter.setTextFragmenter(new SimpleFragmenter(50));
    for (int i = 0; i < hits.length(); i++) {
        String text = hits.doc(i).get(FIELD_NAME);
        int maxNumFragmentsRequired = 5;
        String fragmentSeparator = "...";
        TermPositionVector tpv = (TermPositionVector) reader
                .getTermFreqVector(hits.id(i), FIELD_NAME);
        TokenStream tokenStream = TokenSources.getTokenStream(tpv);
        String result = highlighter.getBestFragments(tokenStream, text,
                maxNumFragmentsRequired, fragmentSeparator);
        System.out.println("\n" + result);
    }
    reader.close();
}

Developer ID: no8899, project: paoding-for-lucene-2.4, lines of code: 48
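Examples 24 and 25 share the same highlighting pipeline: QueryScorer scores the query terms, SimpleFragmenter(50) splits the text into fixed-size fragments, the formatter wraps each matched term, and getBestFragments joins the scored fragments with a separator. The core idea can be illustrated with a self-contained sketch (plain Java, no Lucene dependency; the substring-based keyword matching and fixed-size splitting here are simplifications of what QueryScorer and SimpleFragmenter actually do):

```java
import java.util.ArrayList;
import java.util.List;

public class HighlightSketch {

    /** Split text into fragments of roughly fragSize characters (cf. SimpleFragmenter). */
    static List<String> fragment(String text, int fragSize) {
        List<String> frags = new ArrayList<>();
        for (int i = 0; i < text.length(); i += fragSize) {
            frags.add(text.substring(i, Math.min(i + fragSize, text.length())));
        }
        return frags;
    }

    /** Wrap every occurrence of the keyword in <b>...</b> (cf. BoldFormatter). */
    static String bold(String fragment, String keyword) {
        return fragment.replace(keyword, "<b>" + keyword + "</b>");
    }

    /** Keep only fragments containing the keyword, joined by a separator
     *  (cf. Highlighter.getBestFragments). */
    static String bestFragments(String text, String keyword, int fragSize,
                                int maxFragments, String separator) {
        List<String> hits = new ArrayList<>();
        for (String frag : fragment(text, fragSize)) {
            if (frag.contains(keyword) && hits.size() < maxFragments) {
                hits.add(bold(frag, keyword));
            }
        }
        return String.join(separator, hits);
    }

    public static void main(String[] args) {
        String text = "Lucene is a search library.";
        System.out.println(bestFragments(text, "Lucene", 27, 5, "..."));
        // prints: <b>Lucene</b> is a search library.
    }
}
```

Note that real fragmenters avoid splitting inside a token, and QueryScorer weights fragments by term score rather than a simple contains check; this sketch only shows the data flow the two examples rely on.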

Note: the org.apache.lucene.search.highlight.QueryScorer examples in this article were compiled from GitHub, MSDocs, and other source-code hosting platforms, with snippets selected from open-source projects contributed by various developers. Copyright of the source code remains with the original authors; consult each project's license before redistributing or using it, and do not republish without permission.

Copyright notice: this is an original post by the blogger, released under the CC 4.0 BY-SA license; when reposting, include a link to the original source and this notice.
Original link: https://blog.csdn.net/weixin_42502601/article/details/115844458
