End Plate

                                    ~ ~ ~ * ~ ~ ~

                                            __

                              .-----.-----.|  |.----.

                              |__ --|  _  ||  ||   _|

                              |_.-----.-----.--.--|  |

                                |  _  |  _  |     |  |

                                |   __|___._|__|__|__|

                                |__|     ... .-..

                                    ~ ~ ~ * ~ ~ ~

"Parting is such sweet sorrow"


Romeo and Juliet
Act 2,
Scene 2,
176-185


Getting Started With Synapticloop Panl
A rather pleasing companion to the Apache® Solr® Faceted Search Engine.


[1] Thanks for checking out this footnote.

[2] At the time of writing no useful results were returned by any of the large search engines for 'Solr Panl' :).

[3] 'Sensibly' is a bit of a vague term... Panl strips out any unexpected characters and ensures that the value is valid.  For example, if Solr (and therefore Panl) is expecting an integer parameter and the value 5gs6 is passed through, Panl will remove any non-numeric characters and parse the number, returning 56.  Values that cannot be converted will be ignored and not passed through to the Solr server.
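A minimal sketch of the sanitisation behaviour described above might look like this (an illustration of the behaviour only, not Panl's actual implementation):

```python
def sanitise_int(value):
    """Strip any non-numeric characters and parse what remains.

    Mirrors the behaviour described in footnote 3: '5gs6' becomes 56;
    a value with no digits at all is ignored (None is returned) and
    would not be passed through to the Solr server.
    """
    digits = "".join(ch for ch in value if ch.isdigit())
    return int(digits) if digits else None

print(sanitise_int("5gs6"))  # 56
print(sanitise_int("abc"))   # None
```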

[4] An LPSE length of 3 with the five mandatory codes would provide 185,193 facets; a length of 4 would provide 10,556,001.

[5] An LPSE length of 3 with the five mandatory codes and one optional code would provide 175,616 facets; a length of 4 would provide 9,834,496.
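The counts in the two footnotes above fall out of simple exponentiation. A quick sketch of the arithmetic, assuming the LPSE code alphabet is the 62 characters a-z, A-Z and 0-9 (the alphabet size is an assumption for illustration, not taken from Panl's source):

```python
# 62 candidate single-character LPSE codes: a-z, A-Z, 0-9 (assumed).
AVAILABLE_CODES = 62

# Footnote 4: reserving the five mandatory codes leaves 57 per position.
length_3 = (AVAILABLE_CODES - 5) ** 3  # 185,193
length_4 = (AVAILABLE_CODES - 5) ** 4  # 10,556,001

# Footnote 5: five mandatory plus one optional code leaves 56.
opt_length_3 = (AVAILABLE_CODES - 6) ** 3  # 175,616
opt_length_4 = (AVAILABLE_CODES - 6) ** 4  # 9,834,496

print(length_3, length_4, opt_length_3, opt_length_4)
```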

[6] Sigh... depending on memory, processing power etc.

[7] There was a delay from the start of writing this book to the implementation of specific Solr search fields.  Whilst the functionality could have been shoe-horned into the mechanical pencils collection example, it didn't quite work with the fields available, and doing so would have been a little contrived.  However, it did make it into the Bookstore walkthrough example.

[8] Specific dates are notoriously hard to put into examples, as by the time you read this book, the example dates will be well out of range.  However, there is an example data set (simple-date) included within the release package with random dates spanning +/- 10 years from the writing of this book, which can be used to test out the features; you will need to index the data set with separate commands.  There is a utility included in the distribution that will generate sample data spanning +/- 10 years, which can then be re-indexed.

[9] This is probably not the fairest of comparisons, as a lot of the underlying Solr query implementation could be hidden behind the scenes anyhow.  However, Panl can automatically provide CaFUPs for multiple FieldSets, facets, and queries, which will build the query, the returned facets, the fields, and more.

[10] The exceptions to this rule are any defined OR facets, which will increase the number of results that are returned.

[11] When using Apache Solr version 10, the minimum version of Java will be 21.

[12] The Solr query boosting designator (e.g. ^4) is available; it just isn't available through a passed-in query parameter.  Instead, it is configured in Panl when a specific field is searched.
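For readers unfamiliar with the designator, a boosted clause in a raw Solr query looks like the following (the field names here are illustrative, not taken from the example collections):

```
q=name:pencil^4 OR description:pencil
```

In Panl, that boost value is attached to the field configuration rather than supplied by the caller in the query parameter.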

[13] Our recommendation is to use a Java version of at least 21, as Solr 10 will have this as a minimum requirement.

[14] The example data 'techproducts' included with the Apache Solr instance is a reasonable test dataset; however, the way the schema and collections are designed places more emphasis on testing ingestion and searching than on a functional search set.

[15] I am using an Apple Macintosh system, but it is the same for most flavours of Linux.

[16] Commands weren't included here, as a recursive forced deletion of directories (i.e. rm -rf or rmdir /S /Q) can be a very dangerous thing.

[17] This book does not use Solr schema version 1.7, despite the fact that Solr version 9.8.0 uses this schema version.

[18] Your configuration file and Solr version will define an XML element with this format; line numbers are not provided, as they change frequently between versions.

[19] This can be seen in the filesystem indexing walkthrough which was added after the initial book and Panl server release.

[20] Historically, Java-based examples for servers seem to have been based on the ubiquitous Pet Store; time for something new...

[21] I am cheating a little here, as the indexed book title is actually "Mary's Angel"; however, I have deliberately made the title "Mary 's Angel" (with an additional space in the title). This allows a match; otherwise I would have to explain word-stemming and Solr query matching, which is beyond the scope of this book.

[22] Or, the mistakes that were made with the implementation.

[23] Ranges are available in Solr on a StrField, and may work in Panl, however the implementation has not been tested.

[24] This type is generally analysed and used for keyword searches and highlighting.  You could have a prefix and suffix for this field type; however, for any TextField that contains a large amount of text, selecting it as a facet with a prefix and suffix may make the URL too long.

[25] This is not true for managed schema version 1.7 - when using this version, analysed fields CANNOT be set as facets - although there is a workaround to do this.

[26] Just to be clear, the Solr field must be both analysed and stored.  This does not mean all analysed fields and all stored fields.

[27] In some instances, the properties file layout would have been better suited to JSON; however, comments are not allowed in JSON files, which makes explaining the file a lot harder.  Admittedly, HJSON (or some alternative) could have been used and parsed on the way into Panl, but this would reduce portability - sigh - these are the decisions which can reverberate through time and code.

[28] This started off as a simple way to test the Panl configuration and how it interacted with the Solr search server; over time, it became a little more complex.  It also became an incredibly useful tool when adding features to the returned JSON object, making integration and implementation a more developer-friendly experience.

[29] Once again, a simple explainer turned into a more complex application as time and requirements became more involved.

[30] Of course, an OR separator could have been used as well.

[31] Admittedly, it is rather annoying having to know the value ranges ahead of time; however, there are some niceties built into Panl to use the minimum and maximum values.

[32] Once again, it is annoying to have to know the values.

[33] The number of books in the dataset is growing over time, so these numbers may not reflect your Panl Results Viewer.

[34] Note: this is for version 9 of the Apache Solr server; previous versions may have a different JSON response object.  The in-built Panl Results Viewer web app caters for the current and supported previous versions.

[35] Here 'Author' is the name of the document attribute, the actual Solr field name is text_author.

[36] I say 'when all it does' almost dismissively - when in fact it has so many configuration options for fields and FieldSets that the total effort and thought that went into the code is not insignificant.

[37] The current thinking is some sort of horrible regex-type experience which, as they say: "When you have a problem and think 'I know, I will use regular expressions', now you have two problems".

[38] Although German users should be fine with the default implementation from Panl.