README for IBM Network Dispatcher v3.6 
-------------------------------------------------------------------

This README describes new features available for Network Dispatcher
===================================================================

Network Dispatcher v3.6 includes the following new features:
      Java 1.3 Support
      Gigabit Ethernet Support
      Multi-port Ethernet Support
      Solaris Version 8 (32-bit mode) Support
      Red Hat Linux v6.2 Support 
      Collocated Server Support for Red Hat Linux
      Capacity Utilization and Bandwidth Rules
      Server Evaluation Option
      GRE Support
      Advisor Fast-Failure Detection
      Quiesce Enhancement for Sticky Connections
      Enhanced ISS Load Balancing of Network Dispatchers
      SSL Proxy-to-Server Support
      Usability Enhancements


         
Java 1.3 Support
----------------

Network Dispatcher requires Java 1.3 on all supported platforms:
AIX, Red Hat Linux, Solaris, Windows NT, and Windows 2000. Java
1.1.x is no longer supported.

The following are the Java 1.3 requirements by platform:
 
AIX: 
IMPORTANT: AIX 4.3.3.10 plus APARs is required to support Java 1.3.
Refer to the README for the IBM AIX Developer Kit for a list of
required AIX APARs.
        
IBM AIX Developer Kit, Java 2 Technology Edition, Version 1.3.0 for
the Java Runtime Environment
-Note: You must download both the Developer Kit installable package 
and the Runtime Environment installable package prior to running
the InstallShield program. 

Red Hat Linux:        
IBM Runtime Environment for Linux, Java 2 Technology Edition, 
Version 1.3.0
-Set your JAVA_HOME environment variable to the directory where
Java is installed, for example: /opt/IBMJava2-13/jre
-Add to your PATH environment variable: $JAVA_HOME/bin:$PATH
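
For example, assuming Java is installed in /opt/IBMJava2-13/jre, you
might set both variables in a shell as follows (a sketch; adjust the
path to match your installation):

  export JAVA_HOME=/opt/IBMJava2-13/jre
  export PATH=$JAVA_HOME/bin:$PATH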

Solaris:      
Java 2 JRE, Standard Edition, Version 1.3.0
-Add to your PATH environment variable: /usr/j2se/bin

Windows NT and Windows 2000: 
IBM Cross Platform Technologies for Windows v2.0 (SDK 1.3) 
-Note: You must download both the Developer Kit installable package 
and the Runtime Environment installable package prior to running the 
InstallShield program.
-Note: For the ISS component only, if Network Dispatcher is unable to
find where Java 1.3 is installed from the Registry, define an
environment variable IBMND_JRE_LOCATION set to the fully qualified
path of the jvm.dll file. For example:
  IBMND_JRE_LOCATION="c:\Progra~1\IBM\Java13\jre\bin\classic\jvm.dll"

 
To configure WTE for Java 1.3 (if using CBR with HTTP or SSL), the 
following updates are required by platform. (For general information
on configuring WTE for CBR refer to Chapter 7 of the "IBM Network 
Dispatcher User's Guide" Version 3.0 for Multiplatforms):

AIX: 
-Add to your LIBPATH environment variable: 
/usr/java130/jre/bin:/usr/java130/jre/bin/classic

Red Hat Linux:
-Set your JAVA_HOME environment variable to the directory where
Java is installed, for example: /opt/IBMJava2-13/jre
-Add to your PATH environment variable: $JAVA_HOME/bin:$PATH
-Add to your LD_LIBRARY_PATH environment variable:
/opt/IBMJava2-13/jre/bin:/opt/IBMJava2-13/jre/bin/classic
-In the CBR configuration file (ibmproxy.conf), replace the
Java directory (in the class path) with the following:
/opt/IBMJava2-13/jre/bin/classic


Solaris:
-Add to your LD_LIBRARY_PATH environment variable:
/usr/j2se/jre/lib/sparc

Windows NT and Windows 2000: 
Add to your PATH environment variable:
c:\Progra~1\IBM\Java13\jre\bin;c:\Progra~1\IBM\Java13\jre\bin\classic

Compile command for Custom advisors using Java 1.3:

Custom advisors are written in the Java language.  The following
files are referenced during compilation:
  - the custom advisor file
  - the base classes file, ibmnd.jar, found in the dispatcher\lib
    directory where Network Dispatcher is installed.
Your classpath must point to both the custom advisor file and the  
base classes file during the compile.
For Windows, a compile command might look like this:
  javac -classpath <install_dir>\nd\dispatcher\lib\ibmnd.jar 
  ADV_fred.java
where:
  - your advisor file is named ADV_fred.java
  - your advisor file is stored in the current directory
The output for the compilation is a class file, for example:
  ADV_fred.class
Before starting the advisor, copy the class file to the 
dispatcher\lib or cbr\lib directory where Network Dispatcher is
installed.
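
For example, on Windows the deployment step might look like this,
using the <install_dir> notation from the compile example above:

  copy ADV_fred.class <install_dir>\nd\dispatcher\lib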
       

Gigabit Ethernet Support
------------------------

Network Dispatcher now supports 1 Gb (gigabit) Ethernet NICs (Network
Interface Cards) on all supported platforms: AIX, Red Hat Linux, 
Solaris, Windows NT, and Windows 2000.

Note for Solaris:
Hardware requirement -
   Ultra 60 servers support 1 Gb Ethernet NICs. (Other SPARC servers
   do not support 1 Gb Ethernet NICs.)
You need to edit the /opt/nd/dispatcher/ibmnd.conf file as follows -
   replace "hme -1 0 ibmnd" (the default for 100 Mb Ethernet) with
   "ge -1 0 ibmnd" (1 Gb Ethernet).


Multi-port Ethernet Support
---------------------------

Network Dispatcher now supports multi-port Ethernet NICs on all
platforms: AIX, Red Hat Linux, Solaris, Windows NT, and 
Windows 2000.

Note 1: The implementation of multi-port NICs varies from vendor
to vendor.  Because only a subset was tested, support for some
multi-port NICs may be limited.

Note 2: The link-level fault tolerance feature of multi-port NICs
is not supported.

Note 3: The trunking feature of multi-port NICs is not supported.

For Solaris only, you must edit the /opt/nd/dispatcher/ibmnd.conf
file.  In the ibmnd.conf file, replace "hme -1 0 ibmnd" (the default
for 100 Mb Ethernet) with "qfe -1 0 ibmnd" (multi-port Ethernet).

For Windows 2000 only, when using the Adaptec Quartet64 Fast
Ethernet card, you must disable the "Transmit Checksum
Offload" option for each multi-port adapter. To disable this
option, do the following:
1) Start -> Settings -> Network and Dial-up Connections
2) Right-click the quad-port adapter
3) Select Properties
4) On the General tab, click Configure
5) On the Advanced tab, select Transmit Checksum Offload and then
   select Disable
6) Click OK

For Windows NT only, when using a multi-port adapter, it may be
necessary to install Network Dispatcher first, prior to installing
the multi-port adapter device driver.  Failure to do so may result
in a Windows NT blue screen and kernel memory dump. If a blue screen
occurs, contact your operating system vendor for analysis of the
memory dump.


Solaris Version 8 (32-bit mode) Support
---------------------------------------

In addition to supporting Solaris Version 2.6 and Solaris Version 7 
(32-bit mode), Network Dispatcher now also supports Solaris Version 8 
(32-bit mode).  

For Solaris 8, there is a new ifconfig command to add an alias to
the loopback device:
     ifconfig lo0:1 plumb <cluster_address> netmask <netmask> up

To remove an alias from the loopback device:
     ifconfig lo0:1 unplumb
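
For example, assuming a hypothetical cluster address of 9.67.131.18
and a netmask of 255.255.255.0, the commands would be:

     ifconfig lo0:1 plumb 9.67.131.18 netmask 255.255.255.0 up
     ifconfig lo0:1 unplumb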

For Solaris requirements, refer to Chapter 2 of the "IBM 
Network Dispatcher User's Guide" Version 3.0 for Multiplatforms.


Red Hat Linux v6.2 Support
--------------------------

In addition to supporting Red Hat Linux v6.1 (Linux kernel version
2.2.12-20, as well as any additional fixes to 2.2.12), Network 
Dispatcher now also supports Red Hat Linux v6.2 (Linux kernel 
version 2.2.16-3, as well as any additional fixes to 2.2.16).

For Red Hat Linux v6.2, when configuring a collocated server, you
must issue the following echo commands:
  echo 1 > /proc/sys/net/ipv4/conf/lo/hidden
  echo 1 > /proc/sys/net/ipv4/conf/all/hidden

Note: Red Hat Linux v6.2 (Linux kernel version 2.2.16-3) does
NOT require a patch in order to support both collocation and high
availability at the same time.

For Red Hat Linux requirements, refer to Chapter 2 of the "IBM 
Network Dispatcher User's Guide" Version 3.0 for Multiplatforms.


Collocated Server Support for Red Hat Linux
-------------------------------------------

Collocated server support for Red Hat Linux v6.1 is available for
the Dispatcher component. Collocation refers to installing Network 
Dispatcher on a server machine that it is also load balancing. 
With this release, Network Dispatcher now fully supports collocation
and high availability at the same time for Red Hat Linux v6.1 
(Linux kernel version 2.2.12-20).
   
In earlier releases, Network Dispatcher could support collocation
for Red Hat Linux but could not support both collocation and a 
high availability configuration at the same time. In order to 
configure both collocation and high availability at the same time,
you must install a Linux kernel patch. 

For information on installing the patch, see "IBM Network Dispatcher
User's Guide" Version 3.0, Chapter 5, section "Installing the Linux
kernel patch (for aliasing the loopback adapter)."  However, when
following these instructions, skip the step to alias the loopback
adapter. You should add the ifconfig instruction to alias the 
loopback adapter in the goStandby high-availability script file 
that gets executed when a Dispatcher goes into standby state.

Note: Red Hat Linux v6.2 (Linux kernel version 2.2.16-3) does
NOT require a patch in order to support both collocation and high
availability at the same time.
		 

Capacity Utilization and Bandwidth Rules 
----------------------------------------

Capacity utilization and bandwidth rules are available for the 
Dispatcher component. Using the capacity utilization feature, 
Dispatcher measures the amount of data delivered by each of its 
servers. Dispatcher tracks capacity at the server, rule, port, 
cluster, and executor levels. For each of these levels, there is a 
new byte counter value: kilobytes transferred per second. The rate 
value (kilobytes transferred per second) is calculated over a 60 
second interval. You can view these capacity values from the GUI or 
from the output of a command line report.

Dispatcher allows you to allocate a specified bandwidth to sets of 
servers within your configuration using the "reserved bandwidth" rule.
When traffic exceeds the reserved bandwidth threshold, you can do 
either of the following:
-- Send the traffic, using an always true rule, to another server that
responds with a "site busy" type response.
-- Or, share a specified amount of bandwidth at the cluster level or
executor level using the "shared bandwidth" rule. Then, when the
overall shared bandwidth threshold is approached, you can direct
traffic, using an always true rule, to another server that responds
with a "site busy" type response.

By using the shared bandwidth rule in conjunction with the reserved 
bandwidth rule, as described above, you can provide preferred clients
with increased server access and optimal performance for their 
transactions. For example, using the shared bandwidth rule to recruit 
unused bandwidth, you can allow online trading customers executing 
trades on server clusters to receive greater access than customers 
using other server clusters for investment research.    

Note the following to determine whether bandwidth rules can help you 
manage the volume of response traffic that flows from servers to 
clients: 
-- Bandwidth rules can help to manage the volume of response traffic
that flows from a set of server machines, based upon the client
requests that flow through Network Dispatcher.  If some client
traffic goes directly to the server machines and is unseen by Network
Dispatcher, then results may be unpredictable.
-- Bandwidth rules can help to manage the volume of response traffic 
flowing on a link from a set of server machines to the network when
all servers use the same link to the network.  If servers use 
different links, or multiple links, to access the network, then 
results for each individual link may be unpredictable.
-- Bandwidth rules are helpful only when all servers are local to the 
Network Dispatcher machine.  If some servers are remote, having 
different paths to the network, then results may be unpredictable.

How to configure the two new rules associated with capacity 
utilization (the reserved bandwidth rule and the shared bandwidth 
rule):

Reserved Bandwidth rule --
The reserved bandwidth rule allows you to load-balance based on the 
number of kilobytes per second being delivered by a set of servers. By 
setting a threshold (allocating a specified bandwidth range) for each 
set of servers throughout the configuration, you can control and 
guarantee the amount of bandwidth being used by each cluster-port 
combination. An example of the new rule type, reservedbandwidth, for 
the ndcontrol rule command follows:
   ndcontrol rule [add] <cluster>:<port>:<rule> type reservedbandwidth 
     beginrange <low> endrange <high>  
 
(The low beginrange is an integer that defaults to 0, and the high
endrange is an integer that defaults to 4294967295, which is 2 to
the 32nd power minus 1.)

Shared Bandwidth rule --
If the amount of data transferred exceeds the limit for the reserved 
bandwidth rule, the shared bandwidth rule provides you the ability to 
recruit unused bandwidth available at the site. You can configure this
rule to share bandwidth at either the cluster or the executor level. 
Sharing bandwidth at the cluster level allows a port (or ports) to
share a maximum amount of bandwidth across several ports
(applications/protocols) within the same cluster. Sharing bandwidth
at the executor level allows a cluster (or clusters) within the entire
Dispatcher configuration to share a maximum amount of bandwidth.

Prior to configuring the shared bandwidth rule, you must specify the
maximum amount of bandwidth (kilobytes per second) that can be shared
at the executor or cluster level, using the ndcontrol executor or
ndcontrol cluster command with the sharedbandwidth option. The
following are examples of the command syntax:
    ndcontrol executor [set] sharedbandwidth <value>
    ndcontrol cluster [add | set] <cluster> sharedbandwidth <value> 
   
(The value for sharedbandwidth is an integer value. The default is 
zero. If the value is zero, then bandwidth cannot be shared.)

Note: You should specify a maximum shared bandwidth value that does
not exceed the total bandwidth (total server capacity) available. 

The following are examples of command syntax for the new rule type, 
sharedbandwidth:
    ndcontrol rule [add] <cluster>:<port>:<rule> type sharedbandwidth 
         sharelevel <value>
    ndcontrol rule [set] <cluster>:<port>:<rule> sharelevel <value> 

(The value for sharelevel is either executor or cluster. Sharelevel is
a required parameter on the sharedbandwidth rule.)
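
For example, the following hypothetical sequence (the cluster name,
port, rule names, and kilobyte values are illustrative only) reserves
up to 300 kilobytes per second for one set of servers and allows them
to draw on 700 kilobytes per second shared across the executor:

    ndcontrol executor set sharedbandwidth 700
    ndcontrol rule add clusterA:80:rbwrule type reservedbandwidth
         beginrange 0 endrange 300
    ndcontrol rule add clusterA:80:sbwrule type sharedbandwidth
         sharelevel executor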


Server Evaluation Option 
------------------------

The server evaluation option is available for the Dispatcher
component.  In addition to the bandwidth rules, there is a new rule
option "evaluate" on the ndcontrol rule command. Use the "evaluate"
keyword to choose whether to evaluate the rule's condition across all
the servers within the port or only across the servers within the
rule.
 
Note: In earlier versions of Network Dispatcher, you could only 
measure each rule's condition across all servers within the port.

The option to measure the rule's condition across the servers within 
the rule allows you to configure two rules with the following 
characteristics:  The first rule that gets evaluated contains all the 
servers maintaining the content, and the evaluate option is set to 
"rule" (evaluate rule's condition across the servers within the rule).
The second rule is an always true rule that contains a single server 
that responds with a "site busy" type response.  The result is that 
when traffic exceeds the threshold of the servers within the first
rule, traffic will be sent to the "site busy" server within the 
second rule.  When traffic falls below the threshold of the servers
within the first rule, new traffic once again flows to the servers
in the first rule.

On the other hand, if you set the evaluate option to "port" for the
first rule (evaluate rule's condition across all servers within the
port), when traffic exceeds the threshold of that rule, traffic is
sent to the "site busy" server associated with the second rule.  Since
the first rule measures all server traffic (including the "site busy"
server) within the port to determine whether the traffic exceeds the
threshold, as congestion decreases for the servers associated with the
first rule, an unintentional result may occur: traffic continues
to the "site busy" server because traffic within the port still
exceeds the threshold of the first rule.

The server evaluation option is only valid for the following rules 
that make their decisions based upon the characteristics of the 
servers: total connections (per second) rule, active connections 
rule, and reserved bandwidth rule. The following are examples of the
command syntax for the server evaluation option (evaluate):
    ndcontrol  rule [add] <cluster>:<port>:<rule> type 
         reservedbandwidth evaluate <value> 
    ndcontrol  rule [set] <cluster>:<port>:<rule> evaluate <value> 
 
(The value for evaluate is either port or rule. The default is port.) 
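
As a sketch of the two-rule configuration described above (the
cluster, port, rule names, and ranges are hypothetical, and the
always true rule is written here with an assumed rule type keyword
of true):

    ndcontrol rule add clusterA:80:contentrule type reservedbandwidth
         beginrange 0 endrange 300 evaluate rule
    ndcontrol rule add clusterA:80:busyrule type true

You would then associate the content servers with the first rule and
the single "site busy" server with the second rule.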


GRE Support
-----------

This feature is available for the Dispatcher component.  Generic 
Routing Encapsulation (GRE) is an internet protocol specified in 
RFC 1701 and RFC 1702.  Using GRE support, Network Dispatcher 
encapsulates client IP packets inside IP/GRE packets and forwards 
them to server platforms such as OS/390 that support GRE.  GRE 
support allows Network Dispatcher to load balance packets to multiple 
server addresses associated with one MAC address. Earlier releases 
of Network Dispatcher required one-to-one correspondence between MAC 
address and server address.  

Network Dispatcher implements GRE as part of its WAND (Wide Area 
Network Dispatcher) feature. This allows Network Dispatcher to 
provide wide area load balancing directly to any server systems that 
can unwrap the GRE packets.  Network Dispatcher does not need to be 
installed at the remote site if the remote servers support the
encapsulated GRE packets.  Network Dispatcher encapsulates WAND 
packets with the GRE key field set to decimal value 3735928559
(hexadecimal DEADBEEF).

For example, to add an OS/390 machine that supports GRE, define the 
OS/390 server within your Network Dispatcher configuration as if you 
are defining a WAND server in the cluster:port:server hierarchy. See
the "Configure wide area Dispatcher support" section in Chapter 8 of the
"IBM Network Dispatcher User's Guide" Version 3.0 for Multiplatforms.
No new ndcontrol commands are required to enable GRE support. 
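
For example, assuming a hypothetical cluster, port, and server
address, and assuming the router keyword used when defining WAND
servers, the definition might look like this:

    ndcontrol server add cluster1:80:os390server router 9.67.125.3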


Advisor Fast-Failure Detection
------------------------------

This feature is available for the Dispatcher and CBR components. With
this enhancement, you can set the timeout values at which an advisor
detects that a server has failed. The new failed-server timeout values
(the connecttimeout and receivetimeout keywords) determine how long an
advisor waits before reporting that either a connect or a receive has
failed.

To obtain the fastest failed-server detection, set the new advisor 
timeouts to the smallest value (one second), and set the advisor and 
manager interval time to the smallest value (one second).

Note: If your environment experiences a moderate to high volume of 
traffic such that server response time increases, be careful not to 
set the timeout values too small, or the advisor may prematurely mark
a busy server as failed.

The timeout values can be set from either the GUI or the command line. 
The following is the command syntax for connecttimeout and 
receivetimeout:
   ndcontrol advisor connecttimeout <advisor> <port> 
       <timeoutseconds>
   ndcontrol advisor receivetimeout <advisor> <port> 
       <timeoutseconds>  

For example: 
   ndcontrol advisor connecttimeout http 80 1
   ndcontrol advisor receivetimeout http 80 1

(Valid values for timeoutseconds are integers greater than zero. The 
default for timeoutseconds is 3 times the value specified for the 
advisor interval time.)
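
For example, the following hypothetical sequence configures the
fastest failed-server detection described above (one-second advisor
timeouts and one-second advisor and manager intervals); the interval
commands are assumed here to follow the usual ndcontrol advisor and
ndcontrol manager syntax:

   ndcontrol advisor connecttimeout http 80 1
   ndcontrol advisor receivetimeout http 80 1
   ndcontrol advisor interval http 80 1
   ndcontrol manager interval 1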


Quiesce Enhancement for Sticky Connections
------------------------------------------

This feature is available for the Dispatcher and Content Based Routing
(CBR) components. To remove a server from the Network Dispatcher
configuration for any reason (updates, upgrades, service, etc.), you
can use the ndcontrol manager quiesce command, which allows existing
connections to complete (without being severed) while preventing all
new connections to the quiesced server.

The quiesce enhancement extends the server quiesce function to 
recognize existing connections that have the affinity/sticky feature.
For example, if you quiesce a server, and an existing connection has
affinity to the server, then Network Dispatcher can forward subsequent
new connections from that client to the quiesced server as long as the
subsequent new connections arrive before the stickytime expires. This
enhancement provides a graceful, less abrupt, handling of sticky 
connections when quiescing servers. For instance, you can "gracefully"
quiesce a server and then wait for the time when there is the least
amount of traffic (perhaps early morning) to completely remove the
server from the configuration.

A new optional keyword "now" has been added to the ndcontrol manager 
quiesce command:
    ndcontrol manager quiesce <address> now

Only use quiesce "now" if you have stickytime set, and you want new
connections sent to another server before stickytime expires.

The now option determines how sticky connections will be handled as 
follows:

If you do not specify "now," existing connections are allowed to
complete, and subsequent new connections from those clients with
existing connections that are designated as sticky are forwarded to
the quiesced server, as long as the quiesced server receives the new
request before stickytime expires. (However, if you have not enabled
the sticky/affinity feature, the quiesced server cannot receive any
new connections.) This is the more graceful way to quiesce servers.

By specifying "now," you quiesce the server so it allows existing 
connections to complete but disallows all new connections including
subsequent new connections from those clients with existing 
connections that are designated as sticky.  This is the more abrupt
way to quiesce servers, which was the only way it was handled in 
earlier releases.

You can view the quiesce status of the server from the GUI.  Or, you
can view it from the command line by issuing the ndcontrol server
status command.
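
For example, a graceful removal of a server (the addresses are
hypothetical, and the remove command is assumed to follow the usual
ndcontrol server syntax) might proceed as follows: quiesce the server,
check its status during a low-traffic period, and then remove it:

   ndcontrol manager quiesce 9.67.131.18
   ndcontrol server status clusterA:80:9.67.131.18
   ndcontrol server remove clusterA:80:9.67.131.18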


Enhanced ISS Load Balancing of Network Dispatchers
--------------------------------------------------

This feature is available for the Dispatcher and ISS components. The
ISS enhancement provides improved ISS DNS load balancing of Network
Dispatchers in a two-tiered ISS configuration where an ISS DNS monitor
determines how to load balance across a lower tier of Network 
Dispatchers. Previously for this type of load balancing (across a 
wide area network), Network Dispatcher ran script files to furnish
the ISS monitor with measurements based on the CPU load and memory
of the Network Dispatcher machine, which is more a measure of a
particular machine than of the pool of servers behind it.

With this enhancement, Dispatcher provides a "self" advisor that 
collects load status information on backend servers.  The self advisor
specifically measures the connections per second rate on backend 
servers of the Network Dispatcher at the executor level.  The self 
advisor writes the results to the ndloadstat file. 

Network Dispatcher also provides an external metric called ndload, 
which you add to the ISS configuration file when defining a 
ResourceType.  The ISS agent on each Network Dispatcher machine runs 
its configuration that calls the external metric ndload. The ndload 
script extracts a string from the ndloadstat file and returns it to
the ISS agent.  Subsequently, each of the ISS agents (from each of the
Network Dispatchers) returns the load status value to the ISS monitor
for use in determining which Network Dispatcher should receive client
requests.

The ndload executable resides in the /dispatcher directory for Network
Dispatcher.  For example, you could specify the executable file name,
including its path, in the ISS configuration file as follows:

ResourceType NTScript
Metric External "C:\Progra~1\IBM\nd\dispatcher\ndload.exe"

For this feature, the ndcontrol report command displays a new
statistic, the connections per second rate, for the executor and
cluster.  You can view the connections per second rate from the GUI
as well.


SSL Proxy-to-Server Support
---------------------------

This feature is available for CBR with WTE (Caching Proxy).  Network
Dispatcher extends CBR to support SSL from the proxy to the server,
allowing complete SSL connections from the client to the server.
In previous releases, CBR supported SSL on the client-to-proxy side,
but not on the proxy-to-server side.  CBR would receive an SSL
transmission from the client and then decrypt the SSL request before
proxying the request to an HTTP server. With
SSL proxy-to-server support, you can define an SSL server in the CBR 
configuration to receive the SSL request from the client. This 
feature provides you the ability to maintain a secure site, using CBR 
to load balance across secure (SSL) servers.

CBR will continue to support client-to-proxy in SSL and 
proxy-to-server in HTTP. To support this function, there is a new 
optional keyword "serverport" on the cbrcontrol port command.  Use 
this keyword when you need to indicate that the port on the server is
different from the incoming port from the client.  You can view the
serverport value from the Port Status GUI panel ("Server(s) listening
on port" field) or from the cbrcontrol port status command. An example
of the cbrcontrol port command for serverport follows, where the 
client's port is 443 and the server port is 80:
       cbrcontrol port [add | set] <cluster>:443 serverport 80            

(The port number for serverport can be any positive integer value. The
default is the port number value of the incoming port from the 
client.) 

Since CBR must be able to advise on an HTTP request for a server 
configured on SSL port 443, a special advisor "ssl2http" is provided.
This advisor starts on port 443 (incoming port from the client) and 
advises on the server(s) configured for that port.  If there are two 
clusters configured, each with port 443 but a different serverport,
then a single instance of the advisor opens the appropriate serverport
for each cluster. The following is an example of this configuration:
      Executor
          Cluster 1
              Port:443 serverport 80
                  Server 1
                  Server 2
          Cluster 2
              Port:443 serverport 8080
                  Server 3
                  Server 4
      Manager
          Advisor ssl2http 443
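
As a sketch, the configuration above might be built with cbrcontrol
commands such as the following (the cluster and server names are
hypothetical; the cluster, server, and advisor commands are assumed
to mirror the ndcontrol syntax shown elsewhere in this README):

      cbrcontrol cluster add cluster1
      cbrcontrol port add cluster1:443 serverport 80
      cbrcontrol server add cluster1:443:server1
      cbrcontrol server add cluster1:443:server2
      cbrcontrol cluster add cluster2
      cbrcontrol port add cluster2:443 serverport 8080
      cbrcontrol server add cluster2:443:server3
      cbrcontrol server add cluster2:443:server4
      cbrcontrol advisor start ssl2http 443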


Usability Enhancements
----------------------

> Display ndserver log status:

This feature is available for the Dispatcher and CBR components. A new
keyword (logstatus) has been added to the ndcontrol/cbrcontrol set
command to display the server log settings (logging level and log 
size). An example of this command follows:
     ndcontrol set logstatus


> Display cluster configuration status:

This feature is available for the Dispatcher component. The ndcontrol 
cluster status command returns an additional value: cluster 
configuration status. Cluster configuration status has information 
on whether the cluster is aliased (configured) on the NIC.  
Possible status results are: configured, unconfigured, or unavailable. 
A return status of unavailable results if the cluster did not respond.
This information is also available from the GUI, under the Current 
Statistics tab within the Cluster panel. 
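
For example (the cluster address is hypothetical):

     ndcontrol cluster status 9.67.131.18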


> Port-specific manager proportion enhancement:

This feature is available for the Dispatcher and CBR components. If
the port-specific manager proportion is zero when an advisor is added,
the manager subtracts 1 each from the active and new connections
proportions and sets the port-specific proportion to 2.  If the system
metric proportion is set to 100 when an advisor is added, the manager
subtracts 2 from the system metric proportion and adds 2 to the
port-specific proportion.

Note: Each of the manager proportion values -- active connections, new
connections, port, system metrics -- is expressed as a percentage of 
the total and therefore must always total 100.
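
For example, assuming the proportions can be set explicitly with an
ndcontrol manager proportions command (an assumption; the four values
are the active connections, new connections, port-specific, and
system metric percentages, which must total 100):

     ndcontrol manager proportions 48 48 2 2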


> New optional parameter for the ndload ISS external metric:

This feature is available for the ISS component on Windows NT and
Windows 2000.  In the ISS configuration file, you can now specify the
directory for the ndloadstat file as a parameter to the ndload ISS
external metric.  You only need to specify the directory if you start
ISS from a directory other than the ISS directory
(progra~1/ibm/nd/iss).

An example of specifying the directory for the ndloadstat file on 
the external metric follows:
 Metric EXTERNAL 
   \progra~1\ibm\nd\dispatcher\ndload  \\progra~1\\ibm\\nd\\dispatcher

Note: The first directory listed is for ndload. The second directory
listed is the path to the ndloadstat file.  The double backslashes are
required as separators in the path to the ndloadstat file.  Also, the
fully qualified path is necessary up to, but not including, the
ndloadstat file name.


> For Solaris, ibmnd configuration file enhancement:

This feature is available for the Dispatcher component only on the
Solaris platform.  When removing an installed version of Network
Dispatcher, the ibmnd.conf file will be renamed ibmnd.conf.bak to
preserve configuration information.  After re-installing Network
Dispatcher, you can rename ibmnd.conf.bak to ibmnd.conf. 
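
For example, after re-installing, you might restore the saved
configuration as follows:

   cd /opt/nd/dispatcher
   mv ibmnd.conf.bak ibmnd.conf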

Note: The ibmnd.conf file resides in the /opt/nd/dispatcher directory.
For more information on the ibmnd.conf file, see "Setting up the 
Dispatcher machine" section in Chapter 5 of the "IBM Network 
Dispatcher User's Guide" Version 3.0 for Multiplatforms.