A big part of developing with Solr is learning about the Solr server itself and how to control the content of the data we're developing against. This post explains how we populate and refresh the indexed data of both the Solr Http Server and Embedded Solr in NixMash Spring.
We're going to cover these points:

- Starting the Solr Http Server and loading the default example collection
- Importing custom documents with a doctype field for filtering by type
- Adding the doctype field to schema.xml
- Refreshing the index with a refreshSolr.sh bash script
Below is the command for starting up the Solr Http Server. We're using 4.10.4 rather than a Solr 5.x release because Spring Boot complains about dependency issues, even when using the 5.x Solr-Core and Solr-SolrJ libraries. There is probably a solution I haven't found yet, but for now I'm crying Uncle and staying with Solr 4.x with Spring.
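The original command isn't reproduced here, but with the bin/solr script that ships with Solr 4.10.x it would be along these lines (the install path is illustrative):

```shell
# Start Solr 4.10.x and load the default example collection.
# The install path below is illustrative -- adjust to your layout.
cd ~/solr-4.10.4
bin/solr start -e default
```
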
The -e default loads the Default Solr Example Collection with the name Collection1.
The Collection will initially have no documents in it, which differs from Solr 5.x, which populates the index on startup.
We're loading a variety of documents into the example database, including products, manufacturers, currency and books. I want to filter documents by type in my Spring queries, so I added a doctype text field to the Solr schema and to all of the XML, JSON and CSV documents imported into the database.
The doctype field in a document import record.
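As a sketch, an XML import record carrying the new field might look like this (the product id and name are illustrative, not taken from the actual NixMash Spring import files):

```xml
<add>
  <doc>
    <field name="id">SP2514N</field>
    <field name="name">Samsung SpinPoint P120 SP2514N - hard drive</field>
    <field name="doctype">product</field>
  </doc>
</add>
```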
The doctype schema.xml field addition.
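A minimal sketch of the schema.xml addition, assuming a stock text field type since the post describes doctype as a text field:

```xml
<field name="doctype" type="text_general" indexed="true" stored="true"/>
```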
The custom import documents and schema.xml are found in the NixMash Spring root /install/solr directory.
The post.jar tool ships with Solr for adding documents. You could copy the schema.xml into your Collection1/conf directory by hand, but I'm highlighting a refreshSolr.sh bash script which does three things.
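The script itself isn't reproduced here, but assuming the three steps are copying the schema, clearing the index, and re-posting the documents, a minimal sketch would look something like this (all paths are illustrative):

```shell
#!/bin/bash
# refreshSolr.sh -- illustrative sketch; adjust the paths to your own layout.
SOLR_EXAMPLE=~/solr-4.10.4/example
SOLR_DOCS=~/nixmash-spring/install/solr

# 1) Copy the custom schema.xml into the collection's conf directory
cp "$SOLR_DOCS/schema.xml" "$SOLR_EXAMPLE/solr/collection1/conf/"

# 2) Delete all documents currently in the index
java -Ddata=args -jar "$SOLR_EXAMPLE/exampledocs/post.jar" \
    "<delete><query>*:*</query></delete>"

# 3) Re-post the custom XML, JSON and CSV documents
cd "$SOLR_DOCS"
java -jar "$SOLR_EXAMPLE/exampledocs/post.jar" *.xml
java -Dtype=application/json -jar "$SOLR_EXAMPLE/exampledocs/post.jar" *.json
java -Dtype=text/csv -jar "$SOLR_EXAMPLE/exampledocs/post.jar" *.csv
```

Note that Solr only picks up the copied schema.xml after a core reload or restart, so the schema copy should happen before the server is started (or be followed by a reload).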
Here's what it looks like when running the refreshSolr.sh script in Eclipse.
Indexes ready for coding!