                         README file for Sslurp! 1.4
                         ===========================

Contents:        Description
                 How to install
                 Where to find the latest version
                 To-Do list
                 Revision History
                 Contacting the author


                                   Description
                                   ===========

Sslurp! can retrieve Web pages from an HTTP (WWW) server. It can be configured
to follow all hyperlinks on the page that lead to other pages on the same
server. Images on the pages can be retrieved as well. All pages are stored on
disk and can be viewed later using your web browser.
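The "same server" rule described above can be illustrated with a small sketch
(this is a hypothetical Python illustration of the idea, not Sslurp!'s actual
implementation; all URLs and function names here are made up):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect link and image references from a page, as a crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if (tag, name) in (("a", "href"), ("img", "src")) and value:
                self.links.append(value)

def same_server_links(base_url, html):
    """Resolve each link against the page's URL and keep only those
    pointing at the same server -- off-site links are not followed."""
    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(base_url).netloc
    return [urljoin(base_url, link) for link in parser.links
            if urlparse(urljoin(base_url, link)).netloc == host]

page = ('<a href="/next.html">next</a> '
        '<a href="http://other.example/x">off-site</a> '
        '<img src="pic.gif">')
print(same_server_links("http://www.example.com/index.html", page))
# -> ['http://www.example.com/next.html', 'http://www.example.com/pic.gif']
```

The off-site link is dropped; the relative image reference is resolved
against the page URL and kept, since it lives on the same server.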

Sslurp! can make use of a proxy HTTP server, speeding up the whole procedure.
Sslurp! requires at least one HPFS partition!

Sslurp! is Freeware, i.e. you can use and distribute it for free. However,
you may not sell or rent it! Sslurp! is still copyrighted software.


                                 How to install
                                 ==============

Installation is simple. First, unpack the ZIP file (hey, you already did it!).
Second, copy the files to some directory. Start the program. Select
"Setup/Options" from the menu to configure the software. Press F1 to get help.

If you are upgrading from a beta version and would like to keep your settings,
you must rename SPIDER.INI or WSUCK.INI to SSLURP.INI!


                        Where to find the latest version
                        ================================

New versions are uploaded to these locations:

ftp://ftp.cdrom.com/pub/os2/incoming (will be moved to /pub/os2/internet,
 so check both directories)

Mirrors of the above server, e.g. ftp.leo.org in Europe.

ftp://ftp.wilmington.net/bmtmicro


                                   To-Do list
                                   ==========

This is a list of ideas for upcoming versions:

- Create a timeout handler. The TCP/IP-internal timeout feature is not
  implemented under OS/2 :-(
- Convert absolute links to relative links (now I think I should solve
  this in a different way; stay tuned...)
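The second to-do item, converting absolute links to relative ones so that
saved pages work offline, could be sketched like this (a rough Python
illustration under assumed URLs, not the way Sslurp! will solve it):

```python
import posixpath
from urllib.parse import urlparse

def absolute_to_relative(page_url, link_url):
    """Rewrite an absolute link on the same server into a path relative
    to the page it appears on; off-site links are left untouched."""
    page, link = urlparse(page_url), urlparse(link_url)
    if page.netloc != link.netloc:
        return link_url  # different server: keep the absolute URL
    base_dir = posixpath.dirname(page.path)
    return posixpath.relpath(link.path, start=base_dir)

print(absolute_to_relative("http://www.example.com/docs/index.html",
                           "http://www.example.com/docs/pics/logo.gif"))
# -> pics/logo.gif
```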



                             Revision History (> 1.0)
                             ========================

Version 1.4:

  # New user interface. Now a list of processed and pending URLs
    is displayed. The status information is updated when the URL is processed.
  * URLs with a trailing "/" used to match any file name extension. Now
    these URLs only match "html".
  # Incomplete downloads are no longer recorded as "successful", i.e. they're
    re-loaded next time.


Version 1.3:

  # Drop down list is closed when "Start" button is pressed.
  # Pressing ENTER in the drop down list is equivalent to pressing the
    "Start" button.
  * Colons in URLs are converted.
  # Options are displayed in debug log messages.


Version 1.2:

  * Fixed problems with downloading applets.
  * Extremely long HTML tags are now skipped. They may have caused crashes.
  # Leading whitespace in URLs is now skipped.
  * Some characters in URLs were unnecessarily converted for local storage.
  * Exclusion by extension didn't work correctly.


Version 1.1:

  * APPLET tag without CODEBASE attribute was not processed.
  # Option "Max link level" no longer applies to images and applets.
  + Possibility to exclude a set of link extensions from being downloaded.
  + "Use proxy" option is available as a command line switch.



                              Contacting the author
                              =====================

Sslurp! was written by Michael Hohner. You can reach him at

  Internet:   miho@n-online.de (new!)
  Fidonet:    2:2490/2520.17


27 February 1998, Michael Hohner
