XML, or text.
<META NAME="DC.subject" CONTENT="digital libraries"> |
<meta2> Some Content </meta2> |
Then, to inform Swish-e about the existence of the meta names in your documents, add this line to your configuration file:
MetaNames DC.subject meta1 meta2 |
When searching you can now limit some or all search terms to that MetaName. For example, to look for documents that contain the word apple and that also have either fruit or cooking in the DC.subject meta tag:
swish-e -w 'apple and DC.subject=(fruit or cooking)' |
[ TOC ]
A document property is typically data that describes the document. For example, properties might include a document's path name, its last modified date, its title, or its size. Swish-e stores a document's properties in the index file, and they can be reported back in search results.
Swish-e also uses properties for sorting. You may sort your results by one or more properties, in ascending or descending order.
Properties can also be defined within your documents. HTML and XML files can specify tags (see previous question) as properties. The contents of these tags can then be returned with search results. These user-defined properties can also be used for sorting search results.
For example, if you had the following in your documents
<meta name="creator" content="accounting department"> |
and creator
is defined as a property (see PropertyNames in
SWISH-CONFIG) Swish-e can return accounting department
with the result for that document.
swish-e -w foo -p creator |
Or for sorting:
swish-e -w foo -s creator |
[ TOC ]
MetaNames allows keyword searches in your documents. That is, you can use MetaNames to restrict searches to just parts of your documents.
PropertyNames, on the other hand, define text that can be returned with results, and can be used for sorting.
Both use meta tags found in your documents (as shown in the above two questions) to define the text you wish to use as a property or meta name.
You may define a tag as both a property and a meta name. For example:
<meta name="creator" content="accounting department"> |
placed in your documents and then using configuration settings of:
PropertyNames creator |
MetaNames creator |
will allow you to limit your searches to documents created by accounting:
swish-e -w 'foo and creator=(accounting)' |
That will find all documents with the word foo
that also have a creator meta tag that contains the word accounting
. This is using MetaNames.
And you can also say:
swish-e -w foo -p creator |
which will return all documents with the word foo
, but the results will also include the contents of the creator
meta tag along with results. This is using properties.
You can use properties and meta names at the same time, too:
swish-e -w creator=(accounting or marketing) -p creator -s creator |
That searches only in the creator meta name for either of the words accounting or marketing, prints out the contents of the creator property, and sorts the results by the creator property.
(See also the -x
output format switch in SWISH-RUN.)
[ TOC ]
No. This would require much work to change. But Swish-e works with eight-bit characters, so many character sets can be used. Note that it does call the ANSI C tolower() function, which depends on the current locale setting. See locale(7) for more information.
[ TOC ]
[ TOC ]
Currently, there is not a configuration directive to include a file that contains a list of files to index. But, there is a directive to include another configuration file.
IncludeConfigFile /path/to/other/config |
And in /path/to/other/config
you can say:
IndexDir file1 file2 file3 file4 file5 ... |
IndexDir file20 file21 file22 |
You may also specify more than one configuration file on the command line:
./swish-e -c config_one config_two config_three |
Another option is to create a directory with symbolic links of the files to index, and index just that directory.
[ TOC ]
Swish-e can parse HTML, XML, and text documents. The parser is set by associating a file extension with a parser via the IndexContents directive. You may set the default parser with the DefaultContents directive. If a document is not assigned a parser it will default to the HTML parser.
You may use Filters or an external program to convert documents to HTML, XML, or text.
[ TOC ]
Yes. Starting with version 2.2 Swish-e indexes to temporary files, and then renames the files when indexing is complete. On most systems renames are atomic. But, since Swish-e also generates more than one file during indexing there will be a very short period of time between renaming the various files when the index is out of sync.
Settings in src/config.h control some options related to temporary files, and their use during indexing.
[ TOC ]
Phrases are indexed automatically. To search for a phrase simply place double quotes around the phrase.
For example:
swish-e -w 'free and "fast search engine"' |
[ TOC ]
Use the BumpPositionCounterCharacters configuration directive.
[ TOC ]
There are a number of configuration parameters that control what Swish-e considers a ``word'' and it has a debugging feature to help pinpoint any indexing problems.
Configuration file directives (SWISH-CONFIG)
WordCharacters, BeginCharacters, EndCharacters, IgnoreFirstChar, and IgnoreLastChar are the main settings that Swish-e uses to define a ``word''. See SWISH-CONFIG and SWISH-RUN for details.
Swish-e also uses compile-time defaults for many settings. These are located in src/config.h file.
The command line arguments -k, -v, and -T are useful when debugging these problems. Using -T INDEXED_WORDS while indexing will display each word as it is indexed. You should specify a single file when using this feature since it can generate a lot of output.
./swish-e -c my.conf -i problem.file -T INDEXED_WORDS |
You may also wish to index a single file that contains words that are or are not indexing as you expect and use -T to output debugging information about the index. A useful command might be:
./swish-e -f index.swish-e -T INDEX_FULL |
Once you see how Swish-e is parsing and indexing your words, you can adjust the configuration settings mentioned above to control what words are indexed.
Another useful command might be:
./swish-e -c my.conf -i problem.file -T PARSED_WORDS INDEXED_WORDS |
This will show white-spaced words parsed from the document (PARSED_WORDS), and how those words are split up into separate words for indexing (INDEXED_WORDS).
[ TOC ]
Swish-e indexes words as defined by the WordCharacters setting, as described above. So to avoid indexing numbers you simply remove digits from the WordCharacters setting.
There are also some settings in src/config.h that control what ``words'' are indexed. You can configure swish to never index words that are all digits, vowels, or consonants, or that contain more than some consecutive number of digits, vowels, or consonants. In general, you won't need to change these settings.
Also, there's an experimental feature called IgnoreNumberChars which allows you to define a set of characters that describe a number. If a word is made up of only those characters it will not be indexed.
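As a sketch, a configuration that avoids indexing numbers might combine the two approaches described above (the character lists are illustrative, not the defaults):

```
# Letters only: digits removed from the word character set
WordCharacters abcdefghijklmnopqrstuvwxyz
# Experimental: any word made up solely of these characters is skipped
IgnoreNumberChars 0123456789.,
```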
[ TOC ]
This shouldn't happen. If it does, please post the details to the Swish-e discussion list so the problem can be reproduced by the developers.
In the meantime, you can use a FileRules directive to exclude the particular file name, pathname, or title. If there are serious problems indexing certain types of files, they may not contain valid text (they may be binary files, for instance). You can use NoContents to exclude that type of file.
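For example, a hypothetical configuration that skips problem files by name and indexes certain binary types by path and title only (the pattern and extensions here are illustrative, not defaults):

```
# Skip any file whose name matches this pattern
FileRules filename contains \.log$
# Index only the file name/title of these types, not their contents
NoContents .gif .jpg .png
```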
Swish-e will issue a warning if an embedded null character is found in a
document. This warning will be an indication that you are trying to index
binary data. If you need to index binary files try to find a program that
will extract out the text (e.g. strings(1),
catdoc(1),
pdftotext(1)).
[ TOC ]
When using the file system to index your files you can use the FileRules directive. Note that, other than FileRules title, FileRules only works with the file system (-S fs) indexing method, not with -S prog or -S http.
If you are spidering, use a robots.txt file in your document root. This is a standard way to exclude files from search engines, and is fully supported by Swish-e. See http://www.robotstxt.org/
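A minimal robots.txt sketch (the paths are illustrative) that a conforming spider, including Swish-e's, will honor:

```
User-agent: *
Disallow: /private/
Disallow: /cgi-bin/
```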
You can also modify the spider.pl spider perl program to skip, index content only, or spider only listed web
pages. Type perldoc spider.pl
in the prog-bin
directory for details.
Robots Exclusion in your documents:
<meta name="robots" content="noindex"> |
See the obeyRobotsNoIndex directive.
[ TOC ]
To prevent Swish-e from indexing a common header, footer, or navigation bar, place an HTML tag around the text you wish to ignore and use the IgnoreMetaTags directive. Note that this will generate an error message if ParserWarningLevel is set, as it's invalid HTML.
This works with documents handled by the HTML and XML parsers, but not with documents parsed by the text (TXT) parser.
You may also use the following comments in your documents to prevent indexing:
<!-- SwishCommand noindex --> <!-- SwishCommand index --> |
and/or these may be used also:
<!-- noindex --> <!-- index --> |
[ TOC ]
Use the ReplaceRules configuration directive to rewrite path names and URLs. If you are using -S prog
input method you may set the path to any string.
[ TOC ]
Use the ``prog'' document source method of indexing. Write a program to
extract out the data from your database, and format it as XML, HTML, or
text. See the examples in the prog-bin
directory, and the next question.
[ TOC ]
Swish-e can internally parse only HTML, XML, and TXT (text) files by default, but it can make use of filters that convert other types of files, such as MS Word documents, PDF files, or gzipped files, into one of the file types that Swish-e understands.
Please see SWISH-CONFIG
and the examples in the filters and filter-bin
directory for more information.
The FileFilter directive can be used with any of the input methods, but the -S http method (spidering with the swishspider program) only indexes ..., so filter programs cannot be used to convert documents such as PDF files.
Another option is to use the prog document source input method. In this case you write a program (such as a
perl script) that will read and convert your data as needed and then output
one of the formats that Swish-e understands. Examples of using the prog input method for filtering are included in the prog-bin
directory of the Swish-e distribution.
The disadvantage of using the prog input method is that you must write a program that reads the documents from the source (e.g. from the file system or via a spider to read files on a web server), and also include the code to filter the documents. It's much easier to use the FileFilter option since the filter can often be implemented with just a single configuration directive.
On the other hand, the advantage of using the prog input method for indexing is speed. Filtering within a prog input method program will be faster if your filtering program is something like a Perl script (something that has a large start-up cost). This may or may not be an issue for you, depending on how much time your indexing requires.
You can also use a combination of methods. For example, say you are indexing a directory that contains PDF files using a FileFilter directive. Now you want to index a MySQL database that also contains PDF files. You can write a prog input method program to read your MySQL database and use the same FileFilter configuration parameter (and filter program) to convert the PDF files into one of the native Swish-e formats (TXT, HTML, XML).
Do note that it will be slower to use the FileFilter method instead of running the filter directly from the prog input method program. When FileFilter is used with the prog input method Swish-e must create a temporary file containing the output from your prog method program, and then execute the filter program.
In general, use the FileFilter method to filter documents. If indexing speed is an issue, consider writing a prog input method program. If you are already using the prog method, then filtering will probably be best accomplished within that program.
Here are two examples of how to run a filter program: one using Swish-e's FileFilter directive, the other using a prog input method program. These filters simply use the program /bin/cat as a filter and index only .html files.
First, using the FileFilter method, here's the entire configuration file (swish.conf):
IndexDir . |
IndexOnly .html |
FileFilter .html "/bin/cat" "'%p'" |
and index with the command
swish-e -c swish.conf -v 1 |
Now, the same thing using the prog document source input method and a Perl program called catfilter.pl. You can see that it's much more work than using the FileFilter method above, but it provides a place to do additional processing. In this example, the prog method is only slightly faster. But if you needed a perl script to run as a FileFilter then prog will be significantly faster.
#!/usr/local/bin/perl -w |
use strict; |
use File::Find;  # for recursing a directory tree |
 |
$/ = undef;  # slurp whole files |
 |
find( { wanted => \&wanted, no_chdir => 1 }, '.' ); |
 |
sub wanted { |
    return if -d; |
    return unless /\.html$/; |
 |
    my $mtime = (stat)[9]; |
 |
    my $child = open( FH, '-|' ); |
    die "Failed to fork $!" unless defined $child; |
    exec '/bin/cat', $_ unless $child; |
 |
    my $content = <FH>; |
    my $size = length $content; |
 |
    print <<EOF; |
Content-Length: $size |
Last-Mtime: $mtime |
Path-Name: $_ |
 |
EOF |
    print $content; |
} |
And index with the command:
swish-e -S prog -i ./catfilter.pl -v 1 |
This example will probably not work under Windows due to the '-|' open. A simple piped open may work just as well:
That is, replace:
my $child = open( FH, '-|' ); |
die "Failed to fork $!" unless defined $child; |
exec '/bin/cat', $_ unless $child; |
with this:
open( FH, "/bin/cat $_ |" ) or die $!; |
Perl will try to avoid running the command through the shell if meta
characters are not passed to the open. See perldoc -f open
for more information.
[ TOC ]
See the examples in the conf directory.
[ TOC ]
Some of the examples in the prog-bin directory use a module to convert PDF files into XML, so you must tell Swish-e that you are indexing XML files for the .pdf extension.
IndexContents XML .pdf |
[ TOC ]
Both the -S prog input method and filters use the popen() system call to run the external program. If your external program is, for example, a perl script, you have to tell Swish-e to run perl instead of the script. Also, use backslashes in the program's path, since popen() runs the command via the shell, and the Windows shell expects backslashes in paths (note the doubled backslashes in the examples below).
For example, you would need to specify the path to perl as (assuming this is where perl is on your system):
IndexDir e:\\perl\\bin\\perl.exe |
Or run a filter like:
FileFilter .foo e:\\perl\\bin\\perl.exe 'myscript.pl "%p"' |
[ TOC ]
Swish-e indexes 8-bit characters only. This is the ISO 8859-1 Latin-1 character set, and includes many non-English letters (and symbols). As long as they are listed in WordCharacters they will be indexed.
Actually, you probably can index any 8-bit character set, as long as you don't mix character sets in the same index.
The TranslateCharacters directive (SWISH-CONFIG) can translate characters while indexing and searching. You may specify the mapping of one character to another character with the TranslateCharacters directive.
TranslateCharacters :ascii7:
is a predefined rule that will translate eight-bit characters to seven-bit ASCII characters. Using the :ascii7: rule will, for example, translate ``Ääç'' to ``aac''. This means searching for ``Çelik'', ``çelik'' or ``celik'' will all match the same word.
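In the configuration file this looks like one of the following (use one form or the other; the characters in the second line are illustrative, with each character in the first list translated to the corresponding character in the second):

```
TranslateCharacters :ascii7:
TranslateCharacters àâä aaa
```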
The libxml2 parser converts documents from UTF-8 to Latin-1 when indexing. In cases where a string cannot be converted from UTF-8 to ISO 8859-1 (because it contains non-8859-1 characters), the string will be sent to Swish-e in UTF-8 encoding. This will result in some words being indexed incorrectly. Setting ParserWarningLevel to 1 or more will display warnings when UTF-8 to 8859-1 conversion fails.
[ TOC ]
Not really. Swish-e currently has no way to add or remove items from its index. But, Swish-e indexes so quickly that it's often possible to reindex the entire document set when a file needs to be added, modified or removed. If you are spidering a remote site then consider caching documents locally compressed.
Incremental additions can be handled in a couple of ways, depending on your
situation. It's probably easiest to create one main index every night (or
every week), and then create an index of just the new files between main
indexing jobs and use the -f
option to pass both indexes to Swish-e while searching.
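A search over both indexes might then look like this (a sketch; the index file names are illustrative):

```
swish-e -w foo -f index-main.swish-e index-incremental.swish-e
```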
You can merge the indexes into one index (instead of using -f), but it's not clear that this has any advantage over searching multiple indexes.
How does one create the incremental index?
One method is by using the -N
switch to pass a file path to Swish-e when indexing. It will only index
files that have a last modification date newer
than the file supplied with the -N
switch.
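One way to drive -N is with a dedicated timestamp file rather than the index file itself (a sketch; file names are illustrative):

```
# Index only files modified since the last run, then update the stamp
touch /tmp/stamp.new
swish-e -c swish.conf -N /tmp/stamp -f index.incremental
mv /tmp/stamp.new /tmp/stamp
```

Touching the new stamp before indexing starts means files modified during the run will still be picked up next time.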
This option has the disadvantage that Swish-e must process every file in every directory as if it were going to be indexed (the test for -N is done last, right before indexing of the file contents begins, and after all other tests on the file have been completed) -- all that just to find a few new files.
Also, if you use the Swish-e index file as the file passed to -N
there may be files that were added after indexing was started, but before
the index file was written. This could result in a file not being added to
the index.
Another option is to maintain a parallel directory tree that contains symlinks pointing to the main files. When a new file is added (or changed) to the main directory tree you create a symlink to the real file in the parallel directory tree. Then just index the symlink directory to generate the incremental index.
This option has the disadvantage that you need to have a central program that creates the new files that can also create the symlinks. But, indexing is quite fast since Swish-e only has to look at the files that need to be indexed. When you run full indexing you simply unlink (delete) all the symlinks.
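The symlink approach can be sketched like this (the paths and the commented swish-e invocation are illustrative):

```shell
# Make a parallel tree holding symlinks to new/changed files only
mkdir -p /tmp/docs /tmp/incremental
echo '<html><body>a new page</body></html>' > /tmp/docs/new.html
ln -sf /tmp/docs/new.html /tmp/incremental/new.html

# Index just the symlink directory for the incremental index, e.g.:
#   swish-e -i /tmp/incremental -f index.incremental
# After the next full reindex, delete the symlinks:
#   rm /tmp/incremental/*
ls -l /tmp/incremental
```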
Both of these methods have issues where files could end up in both indexes, or files being left out of an index. Use of file locks while indexing, and hash lookups during searches can help prevent these problems.
[ TOC ]
It's true that indexing can take up a lot of memory! Swish-e is extremely fast at indexing, but that comes at the cost of memory.
The best answer is to install more memory.
Another option is to use the -e switch. This will require less memory, but indexing will take longer, as not all data will be stored in memory while indexing. How much less memory and how much more time depends on the documents you are indexing and the hardware that you are using.
Here's an example of indexing all .html files in /usr/doc on Linux. This first example is without -e and used about 84M of memory:
270279 unique words indexed. |
23841 files indexed. |
177640166 total bytes. |
Elapsed time: 00:04:45 |
CPU time: 00:03:19 |
This is with -e, and used about 26M of memory:
270279 unique words indexed. |
23841 files indexed. |
177640166 total bytes. |
Elapsed time: 00:06:43 |
CPU time: 00:04:12 |
You can also build a number of smaller indexes and then merge them together with -M. Using -e while merging will save memory.
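A merge might look like this (a sketch; the file names are illustrative, and this assumes -M takes the input indexes followed by the output index):

```
swish-e -e -M index.one index.two index.merged
```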
Finally, if you do build a number of smaller indexes, you can specify more than one index when searching by using the -f switch. Sorting large result sets by a property will be slower when specifying multiple index files while searching.
[ TOC ]
That's a good thing! That expensive CPU is supposed to be busy.
Indexing takes a lot of work -- to make indexing fast, much of the work is done in memory, which reduces the amount of time Swish-e is waiting on I/O. But there are two things you can try:
The -e
option will run Swish-e in economy mode, which uses the disk to store data
while indexing. This makes Swish-e run somewhat slower, but also uses less
memory. Since it is writing to disk more often it will be spending more
time waiting on I/O and less time in CPU. Maybe.
The other thing is to simply lower the priority of the job using the
nice(1)
command:
/bin/nice -15 swish-e -c search.conf |
If you are concerned about searching time, make sure you are using the -b and -m switches to return only a page of results at a time. If you know that your result sets will be large, that you wish to return results one page at a time, and that many pages of the same query will often be requested, it may be smart to request all the documents on the first request and then cache the results to a temporary file. The perl module File::Cache makes this very simple to accomplish.
[ TOC ]
[ TOC ]
If possible, use the file system method (-S fs) of indexing to index documents in your web area of the file system. This avoids the overhead of spidering a web server and is much faster. (-S fs is the default method if -S is not specified.)
If this is impossible (the web server is not local, or documents are
dynamically generated), Swish-e provides two methods of spidering. First,
it includes the http method of indexing -S http
. A number of special configuration directives are available that control
spidering (see Directives for the HTTP Access Method Only). A perl helper script (swishspider) is included in the src directory to assist with spidering web servers. There are example
configurations for spidering in the conf directory.
As of Swish-e 2.2, there's a general purpose ``prog'' document source where
a program can feed documents to it for indexing. A number of example
programs can be found in the prog-bin
directory, including a program to spider web servers. The provided
spider.pl program is full-featured and is easily customized.
The advantage of the ``prog'' document source feature over the ``http'' method is that the program is only executed one time, whereas the swishspider program used in the ``http'' method is executed once for every document read from the web server. The forking of a new process and compiling of the perl script can be quite expensive, time-wise.
The other advantage of the spider.pl program is that it's simple and efficient to add filtering (such as for PDF or MS Word docs) right into spider.pl's configuration, and it includes features such as MD5 checks to prevent duplicate indexing, and options to skip some files, or to index pages without spidering them further. And since it's a perl program there's no limit on the features you can add.
[ TOC ]
Does the file swishspider exist where the error message displays? If not, either set the configuration option SpiderDirectory to point to the directory where the swishspider program is found, or place the swishspider program in the current directory when running swish-e.
If you are running Windows, make sure ``perl'' is in your path. Try typing perl from a command prompt.
If you not running windows, make sure that the shebang line (the first line of the swishspider program that starts with #!) points to the correct location of perl. Typically this will be /usr/bin/perl or /usr/local/bin/perl. Also, make sure that you have execute and read permissions on swishspider.
The swishspider perl script is only used with the -S http method of indexing.
[ TOC ]
The spider.pl
program has a default limit of 5MB file size. This can be changed with the max_size
parameter setting. See perldoc
spider.pl
for more information.
[ TOC ]
The spider.pl program has a number of debugging switches and can be quite verbose in
telling you what's happening, and why. See perldoc
spider.pl
for instructions.
[ TOC ]
Swish cannot follow links generated by Javascript, as they are generated by the browser and are not part of the document.
[ TOC ]
You can either merge (-M) the two indexes into a single index, or use -f to specify more than one index while searching. You will have better results with the -f method.
[ TOC ]
[ TOC ]
If you can identify ``parts'' of your index by the path name, you have two options.
The first option is to index the document path. Add this to your configuration:
MetaNames swishdocpath |
Now you can search for words or phrases in the path name:
swish-e -w 'foo AND swishdocpath=(sales)' |
So that will find only documents with the word ``foo'' where the file's path contains ``sales''. That might not work as well as you'd like, though, as both of these paths will match:
/web/sales/products/index.html |
/web/accounting/private/sales_we_messed_up.html |
This can be solved by searching with a phrase (assuming ``/'' is not a WordCharacter):
swish-e -w 'foo AND swishdocpath=("/web/sales/")' |
swish-e -w 'foo AND swishdocpath=("web sales")'    (same thing) |
The second option is a bit more powerful. With the ExtractPath directive you can use a regular expression to extract out a sub-set of the path and save it as a separate meta name:
MetaNames department |
ExtractPath department regex !^/web/([^/]+).+$!$1/ |
This says: match a path that starts with ``/web/'', extract out everything after that up to, but not including, the next ``/'' and save it in variable $1, then match everything from the ``/'' onward. The entire matched string is then replaced with $1, and that gets indexed as the meta name ``department''.
Now you can search like:
swish-e -w 'foo AND department=sales' |
and be sure that you will only match the documents in the /web/sales/* path. Note that you can map completely different areas of your file system to the same metaname:
# flag the marketing specific pages |
ExtractPath department regex !^/web/(marketing|sales)/.+$!marketing/ |
ExtractPath department regex !^/internal/marketing/.+$!marketing/ |
# flag the technical departments pages |
ExtractPath department regex !^/web/(tech|bugs)/.+$!tech/ |
Finally, if you have something more complicated, use -S prog
and write a perl program or use a filter to set a meta tag when processing
each file.
[ TOC ]
Use the -t
switch.
[ TOC ]
Or, I can't search with meta names, all the names are indexed as "plain".
Check in the config.h file whether #define INDEXTAGS is set to 1. If it is, change it to 0, recompile, and index again. When INDEXTAGS is 1, ALL the tags are indexed as plain text; that is, you index ``title'', ``h1'', and so on, AND they lose their indexing meaning. If INDEXTAGS is set to 0, you will still index meta tags and comments, unless you have indicated otherwise in the user config file with the IndexComments directive.
Also, check for the UndefinedMetaTags setting in your configuration file.
[ TOC ]
Debugging CGI scripts is beyond the scope of this document. Internal Server Error basically means ``check the web server's log for an error message'', as it can mean a bad shebang (#!) line, a missing perl module, an FTP transfer error, or simply an error in the program. The CGI script swish.cgi in the example directory contains some debugging suggestions. Type perldoc swish.cgi for information.
There are also many, many CGI FAQs available on the Internet. A quick web search should offer help. As a last resort you might ask your webadmin for help...
[ TOC ]
Your web server is not configured to run the program as a CGI script. This
problem is described in perldoc swish.cgi
.
[ TOC ]
Short answer:
Use the supplied swish.cgi script located in the examples directory.
Long answer:
Swish-e itself can't, because it doesn't have access to the source documents when returning results. But a front-end program of your creation can highlight terms. Your program can open up the source documents and then use regular expressions to replace search terms with highlighted or bolded words.
But that will fail with all but the most simple source documents. For HTML documents, for example, you must parse the document into words and tags (and comments). A word you wish to highlight may span multiple HTML tags, or may be a word in a URL where you wish to highlight the entire link text.
Perl modules such as HTML::Parser and XML::Parser make word extraction possible. Next, you need to consider that Swish-e uses settings such as WordCharacters, BeginCharacters, EndCharacters, IgnoreFirstChar, and IgnoreLastChar to define a ``word''. That is, you can't assume that a string of characters with white space on each side is a word.
Then things like TranslateCharacters, and HTML Entities may transform a source word into something else, as far as Swish-e is concerned. Finally, searches can be limited by metanames, so you may need to limit your highlighting to only parts of the source document. Throw phrase searches and stopwords into the equation and you can see that it's not a trivial problem to solve.
All hope is not lost, though, as Swish-e does provide some help. Using the -H
option it will return in the headers the current index (or indexes)
settings for WordCharacters (and others) required to parse your source
documents as it parses them during indexing, and will return a ``Parsed
Words:'' header that will show how it parsed the query internally. If you
use fuzzy indexing (word stemming, soundex, or metaphone) then you will
also need to stem each word in your document before comparing with the
``Parsed Words:'' returned by Swish-e. The Swish-e stemming code is
available either by using the Swish-e Perl module or C library (included
with the swish-e distribution), or by using the SWISH::Stemmer module
available on CPAN. Also on CPAN is the module Text::DoubleMetaphone.
[ TOC ]
No. Filters (FileFilter or via ``prog'' method) are only used for building the search index database. During search requests there will be no filter calls.
[ TOC ]
The Swish-e discussion list is the place to go. http://swish-e.org/. Please do not email developers directly. The list is the best place to ask questions.
Before you post please read QUESTIONS AND TROUBLESHOOTING located in the INSTALL page. You should also search the Swish-e discussion list archive which can be found on the swish-e web site.
In short, be sure to include the following when asking for help.
[ TOC ]
$Id: SWISH-FAQ.pod,v 1.26 2002/09/11 00:54:09 whmoseley Exp $
[ TOC ]