ClusterPack V2.4 Tutorial
Page 11
... network (LAN) and/or high-performance interconnect technologies. ... supercomputers. Compute clusters are connected through system software and networking technologies. Based on HP Integrity servers with interconnection options, it also has the following key benefits:
- horizontally scalable by adding more nodes
- vertically scalable by using larger SMP nodes
- fault isolation
... A compute cluster has to support both time-to ... The primary driver for a growing number ...
... The components of a cluster are:
- Compute Nodes - servers that provide the computing resource and storage capability.
- Interconnect switch - provides high-speed connectivity between Compute Nodes for applications, over Gigabit Ethernet or InfiniBand.
- Management Processor (MP) - controls the system console, reset, and power management functions.
- Console LAN - network for system administrators and end-users (using the Management Processor LAN).
- Network Attached Storage (NAS) - ...
- Head Node - the HP Integrity rx2600 server, powered by ...
... such as group operations and role-based management. A cluster LAN ... enables customers to separate the system management traffic from application message passing and file serving traffic. ... an integrated solution on HP-UX 11i that offers the following key features:

Installation and configuration
- automated cluster setup
- network services setup (NFS, NTP, NIS, Ignite-UX)
- remote power-on
... The ClusterPack cluster can be managed and used in large-scale data centers for ...
... for both system administrators and end users. ... The NAS 8000 High Availability Cluster was designed for more accessible data and more reliable storage. ... monitoring jobs currently running on the cluster ... a Management Server, and client agents ... For each tool, a basic functional overview is presented. The Related Documents section gives the location of additional information for the initial setup and continuing operation of each tool. ...
... have access to a DVD drive. Each page has a link to the printable version at the bottom of the page. ...

1.1.4 Operating System and Operating Environment Requirements

The key components of the HP Integrity Server Technical Cluster are:
- Management Server: HP Integrity server with HP-UX 11i Version 2.0 TCOE
- Compute Nodes: HP Integrity servers with HP-UX 11i Version 2.0 TCOE
- Cluster Management Software: ClusterPack V2.4

The following prerequisites are assumed:
- HP-UX 11i V2.0 TCOE installed ...
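Since every node must run the same operating environment, a quick pre-flight check of each node's `uname -r` output can catch release mismatches early. The sketch below is illustrative, not part of ClusterPack; it assumes HP-UX 11i V2 reports the release string B.11.23.

```shell
# Illustrative pre-flight check: compare a node's reported OS release
# against the release the cluster expects (HP-UX 11i V2 reports B.11.23).
check_release() {
    # $1 = the output of `uname -r` on the node under test
    case "$1" in
        B.11.23*) echo "ok" ;;
        *)        echo "mismatch" ;;
    esac
}

check_release "B.11.23"    # prints: ok
check_release "B.11.11"    # prints: mismatch
```

Run the same check on the Management Server and on each Compute Node before installing.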
... The minimum release versions required are listed below:
- HP-UX 11i Ignite-UX
- HP-UX 11i V2.0 TCOE
ClusterPack depends on certain open source software to operate correctly. ...

Overview

Allocate file system space on the Management Server:
- /opt - 4 GB
... The Management Server requires two LAN connections. One connection must ... the Management Processor (MP) cards. ... Using the HP-UX 11i V2.0 TCOE DVD, mount and register the DVD. On the Management Server: % /usr/sbin...
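The 4 GB requirement for /opt can be verified with POSIX `df` before installing. A minimal sketch, checking the root filesystem so it runs anywhere (on the Management Server you would point it at /opt); the helper function is illustrative, not a ClusterPack tool:

```shell
# Sketch: verify a filesystem has the free space ClusterPack needs.
# df -Pk prints POSIX-format output in 1 KB blocks (no wrapped lines).
avail_kb() {
    df -Pk "$1" | awk 'NR==2 {print $4}'
}

required_kb=4194304   # 4 GB, the documented minimum for /opt

# "/" keeps the sketch portable; substitute /opt on the Management Server.
if [ "$(avail_kb /)" -ge "$required_kb" ]; then
    echo "enough space"
else
    echo "allocate more space before installing"
fi
```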
... DO NOT power up the systems; ClusterPack will set up the DHCP server. If you do accidentally power up the compute nodes, DO NOT answer the HP-UX boot questions.

Background

This document does not cover hardware details. The manager_config program will ... to run the software. It is necessary to have a serial console cable to connect the console port on the Management Processor to the serial port on the Management Server before ...
... be used for Compute Nodes.

Details
- Select an IP address from the same IP subnet that ... will be installed into the correct locations on the Management Server as part of the cluster:

% /opt/clusterpack/bin/manager_config

Step 7. Configure the ProCurve Switch

Background

The ProCurve Switch must be accessible to the Management Server.
- Connect a console to the switch.
- Log onto the switch through the console.
- Type 'set-up'.
- Select IP Config ...
... the main installation and configuration driver.
- Using the ClusterPack DVD, mount and register the DVD. Leave the DVD in the DVD drive until the ...
- Install the ClusterPack Manager software (CPACK-MGR) on the Management Server.
- Install all of the dependent software components from the ClusterPack DVD.
- Invoke /opt/clusterpack/bin/manager_config on the Management Server to enable auto-startup of the NTP server, NIS server, NFS server, Ignite-UX server, and Web server.
manager_config prompts for the following, based on your specific requirements:
- the name of the cluster,
- the cluster LAN interface on the Management Server,
- the count and starting IP address of the Compute Nodes in the cluster, if you want manager_config to drive the allocation of hostnames and IP addresses of the Compute Nodes,
- whether to mount a home directory,
- the SCM admin password, if SCM is ...

Invoke it with no arguments:

% /opt...
... Remote console access, power management, remote re-boot operations, and temperature monitoring are available by connecting a serial console device to the serial port on the back of ... It is also possible to access the MP as ... A serial console is generally designed for single use ... The information you entered about each MP and its associated IP address ... scales to large clusters. Likewise, nodes removed from the cluster ... are added to the database without requiring you to manually connect the ...
... the aliases to use for the HyperFabric interfaces. First install the drivers and/or kernel patches that are needed. ... Each node that has a HyperFabric interface can be assigned an alias with the extension "hyp"; use these names when ... ftp through this network ... Once the clic interface ... it is necessary to set (or change) the IP address to configure the card. For clnetworks to ...
... File Server

During the installation, manager_config presents the option to mount a /home directory to all of the Compute Nodes from a file server that is accessible to ... The file server's connection to the switch should ... "use /home on Management Server" step:
- If it is enabled ...
To bring up an xterm on the file server, set the DISPLAY variable on the Compute Node to a display server that is not part of the Compute Nodes ...
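Setting the display on a Compute Node can be sketched as follows; `display-server` is a placeholder hostname, not a name from this tutorial:

```shell
# Point X clients on a Compute Node at an external display server.
# "display-server" is a placeholder; substitute the real hostname.
DISPLAY=display-server:0.0
export DISPLAY

echo "$DISPLAY"    # prints: display-server:0.0
```

Users of csh-family shells would use `setenv DISPLAY display-server:0.0` instead.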
... network cards ... publicly accessible IP addresses as Compute Nodes ... and the AppRS utilities (apprs_ls, apprs_clean, etc.). TCP access is restricted by disabling telnet and remsh to prevent users from ... jobs through the ClusterWare GUI or by ... The /etc/hosts.deny file is initially configured ... setting the /etc/hosts.deny file will prevent users' access to ... Modify it with great care ...
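The hosts.deny mechanism uses `daemon : client-list` lines. A hedged sketch of the kind of entries that disable telnet and remsh, written to a scratch file so nothing real is modified (the exact entries ClusterPack writes may differ):

```shell
# Example /etc/hosts.deny entries (written to a scratch copy here).
# Each line is "daemon : client-list"; ALL denies every client.
scratch=/tmp/hosts.deny.example
cat > "$scratch" <<'EOF'
# deny interactive logins that would bypass the resource manager
telnetd : ALL
remshd : ALL
EOF

grep -c ': ALL' "$scratch"    # prints: 2
```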
... the settings from an image ... can control what files are excluded. ... the system configuration files in their current state. ... files which you would like the image to ... make_sys_image -l ... A complete list of the ... may be viewed by using the command:

% /opt/ignite/data/scripts/make_sys_image -x -s local

Users may ...
1.8.3 Restrict user access to specific queues

Using the Clusterware Pro V5.1 CLI:

The name of your cluster can be determined by using the Clusterware Pro V5.1 CLI:

% lsid

The file /share/platform/clusterware/conf/lsbatch//configdir/lsb.queues controls which users can submit to a specific queue. Edit the lsb.queues file and look for a USERS line for the queue you wish to restrict. If a USERS line exists, you can add or remove users from it. Otherwise, add a line of the form:

USERS =
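The edit itself can be scripted. A minimal sketch against a scratch copy of lsb.queues, assuming the usual Begin Queue/End Queue stanza layout; the queue name and user names below are made up for illustration:

```shell
# Add a USERS line to one queue stanza in a scratch lsb.queues copy,
# restricting submission to the named users.
f=/tmp/lsb.queues.example
cat > "$f" <<'EOF'
Begin Queue
QUEUE_NAME = priority
PRIORITY   = 43
End Queue
EOF

# Insert the USERS line right after the queue's QUEUE_NAME line.
sed '/^QUEUE_NAME = priority/a\
USERS = alice bob' "$f" > "$f.new" && mv "$f.new" "$f"

grep '^USERS' "$f"    # prints: USERS = alice bob
```

After changing the real file, the batch daemons must re-read the configuration before the restriction takes effect.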
... troubleshooting help, please see:
- Planning, installing, and updating ServiceControl Manager 3.0: http://docs.hp.com/en/5990-8540/index.html
- ServiceControl Manager 3.0 Troubleshooting Guide: http://docs.hp.com/en/5187-4198/index.html
... This will reboot the machine ... and cause the machine to install from ... After a crash, the Management Server state can be checked by running:

% /opt/clusterpack/bin/finalize_config

1.9.6 Troubleshoot SCM problems ...
... On the Management Server, using SAM or kmtune, make sure that the Kernel Configurable Parameter max_thread_proc is at least 1000.

1.9.7 Replace a Compute Node that has failed with a new machine

If a Compute Node fails due to a hardware problem ... uninstall it and then reinstall it ... Make sure that any shortened host names ... exist in /etc/hosts, and that ...
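The threshold check can be followed without an HP-UX box by parsing a kmtune-style "name value" line; the sample string below is hard-coded and only mimics kmtune output, it is not captured from a real system:

```shell
# Parse a kmtune-style "parameter value" line and test the threshold.
# The sample string stands in for real `kmtune` output on HP-UX.
sample="max_thread_proc 1024"

value=$(echo "$sample" | awk '{print $2}')
if [ "$value" -ge 1000 ]; then
    echo "max_thread_proc ok ($value)"
else
    echo "max_thread_proc too small ($value)"
fi
```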
... Management Processor (MP) Card Interface Overview

3.8.1 Using the MP Card Interface

The MP cards allow ... (HP-UX 11i V2.0):
- Enter Ctrl-B from the system console (serial ... or the cluster LAN port).
- Enter the 'cm' command to access the command menu.
  - Enter the 'pc' command (power control) to ... be halted prior to ...
  - Enter the 'xd -r' command (reset and diagnostics) to reset the MP card.
...
- Enables centralized updates of the HP-UX Operating Environment (and ... of BIOS, drivers, and agents across multiple ProLiant servers with ... Service Pack 1 or later).
- Enables secure management through automated event handling.
- For Linux: Mozilla 1.7.3 or later.

For additional information about the configuration, management, or general troubleshooting, please refer to the HPSIM Technical Reference: http://h18013.www1.hp.com/products/servers/management/hpsim/infolibrary.html

3.9.4 How to run ...