Solaris 10 - Live upgrade procedure

Description

This procedure describes how to upgrade a Solaris 10 system using Live Upgrade in our environment.

THIS MAY NOT BE SUITABLE FOR YOUR ENVIRONMENT

Prerequisites

A master host with an NFS share that contains the Solaris distribution, the Live Upgrade patches, and so on (check the procedure).
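To verify that the share is exported and reachable before starting, you can, for example, list the exports on the master host and the top of the share (paths as used in the rest of this procedure):

showmount -e master.example.net
ls /net/master.example.net/ojd/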

Update the Live Upgrade packages and patches

Remove and reinstall the packages

The Solaris Live Upgrade packages are SUNWlucfg, SUNWlur, and
SUNWluu, and these packages must be installed in that order.

If they are already installed, remove them:

pkgrm   SUNWluu SUNWlur SUNWlucfg
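To check whether the packages are currently installed before removing them, you can query them with pkginfo, for example:

pkginfo SUNWlucfg SUNWlur SUNWluu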

Then reinstall the packages:

cd /net/master.example.net/ojd/SunOS_5.10_i86pc/Solaris_10/Product && pkgadd -d . SUNWlucfg SUNWlur SUNWluu

SunOS_5.10_i86pc is a symbolic link and should point to the latest Solaris update (check on Oracle support).
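To see which update the link currently points to, you can, for example, check it with ls:

ls -l /net/master.example.net/ojd/SunOS_5.10_i86pc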

To determine which update an ISO corresponds to, issue the following command:

# cat Solaris_10/Product/SUNWsolnm/reloc/etc/release
                    Oracle Solaris 10 8/11 s10x_u10wos_17b X86
  Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
                            Assembled 23 August 2011 

Install the LU patchset

A number of patches are required for Live Upgrade. This patchset must be installed before running any Live Upgrade command.

For x86

cd /net/master.example.net/ojd/10_x86_lustarter_patchset && ./installpatchset --lustarter

For SPARC

cd /net/master.example.net/ojd/10_sparc_lustarter_patchset && ./installpatchset --lustarter

Prepare the system

  1. Remove supplementary swap devices (swap1, swap2, and so on); example commands for steps 1 to 4 are sketched after this list.
  2. Remove unnecessary data and old application data in /opt (if not used).
  3. Rename any ZFS file systems whose names contain special characters such as ".".
  4. Remove any blank lines from /etc/vfstab.
  5. Create the auto-registration file (see below):
# echo "autoreg=disable" > /var/tmp/no-autoreg
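A minimal sketch of steps 1 to 4, assuming a supplementary swap zvol named system/swap1, an obsolete application directory /opt/oldapp and a dataset named system/data.old (all example names, adapt them to your system):

# step 1: remove a supplementary swap device and its zvol
swap -d /dev/zvol/dsk/system/swap1 && zfs destroy system/swap1
# step 2: remove old application data
rm -rf /opt/oldapp
# step 3: rename a dataset whose name contains a "."
zfs rename system/data.old system/data_old
# step 4: strip blank lines from /etc/vfstab
grep -v '^$' /etc/vfstab > /etc/vfstab.new && cp /etc/vfstab.new /etc/vfstab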

Make sure you have not mounted any NFS shares (/net) before you upgrade.
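To check for lingering /net automounts, something like the following can be used (any output means a share is still mounted):

mount | grep /net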

Main procedure

Create the ABE

We use the name of the update as the name of the BE and ABE (check /etc/release).

bash-3.00# lucreate -c $(head -1 /etc/release|awk '{print $4}') -n s10x_u10wos_17b
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <s10x_u9wos_14a>.
Creating initial configuration for primary boot environment <s10x_u9wos_14a>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c1t2d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <s10x_u9wos_14a> PBE Boot Device </dev/dsk/c1t2d0s0>.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <s10x_u10wos_17b>.
Source boot environment is <s10x_u9wos_14a>.
Creating file systems on boot environment <s10x_u10wos_17b>.
Populating file systems on boot environment <s10x_u10wos_17b>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <system/ROOT/s10x_u9wos_14a> on <system/ROOT/s10x_u9wos_14a@s10x_u10wos_17b>.
Creating clone for <system/ROOT/s10x_u9wos_14a@s10x_u10wos_17b> on <system/ROOT/s10x_u10wos_17b>.
Mounting ABE <s10x_u10wos_17b>.
Generating file list.
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <s10x_u10wos_17b>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <s10x_u9wos_14a>.
Making boot environment <s10x_u10wos_17b> bootable.
Updating bootenv.rc on ABE <s10x_u10wos_17b>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10x_u10wos_17b> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <s10x_u10wos_17b> in GRUB menu
Population of boot environment <s10x_u10wos_17b> successful.
Creation of boot environment <s10x_u10wos_17b> successful.

Upgrade the BE

Run luupgrade:

bash-3.00# luupgrade -u -k /var/tmp/no-autoreg -n s10x_u10wos_17b -s /net/master.example.net/ojd/SOL_10_811_X86/

No entry for BE <s10x_u10wos_17b> in GRUB menu
Copying failsafe kernel from media.
62128 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/master.example.net/ojd/SOL_10_811_X86//Solaris_10/Tools/Boot>
INFORMATION: Auto Registration already done for this BE <s10x_u10wos_17b>.
Validating the contents of the media </net/master.example.net/ojd/SOL_10_811_X86/>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10x_u10wos_17b>.
Checking for GRUB menu on ABE <s10x_u10wos_17b>.
Saving GRUB menu on ABE <s10x_u10wos_17b>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <s10x_u10wos_17b>.
Performing the operating system upgrade of the BE <s10x_u10wos_17b>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Restoring GRUB menu on ABE <s10x_u10wos_17b>.
Updating package information on boot environment <s10x_u10wos_17b>.
Package information successfully updated on boot environment <s10x_u10wos_17b>.
Adding operating system patches to the BE <s10x_u10wos_17b>.
The operating system patch installation is complete.
ABE boot partition backing deleted.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <s10x_u10wos_17b> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <s10x_u10wos_17b> contains a log of cleanup operations
required.
WARNING: <10> packages failed to install properly on boot environment <s10x_u10wos_17b>.
INFORMATION: The file </var/sadm/system/data/upgrade_failed_pkgadds> on
boot environment <s10x_u10wos_17b> contains a list of packages that failed
to upgrade or install properly.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <s10x_u10wos_17b>. Before you activate
boot environment <s10x_u10wos_17b>, determine if any additional system
maintenance is required or if additional media of the software
distribution must be installed.
The Solaris upgrade of the boot environment <s10x_u10wos_17b> is partially complete.
Creating miniroot device
mount: /dev/lofi/1 is already mounted or /net/master.example.net/ojd/SOL_10_811_X86/Solaris_10/Tools/Boot is busy
ERROR: Unable to mount failsafe archive
Configuring failsafe for system.
WARNING: Default failsafe is missing boot settings file </net/master.example.net/ojd/SOL_10_811_X86//Solaris_10/Tools/Boot/boot/solaris/bootenv.rc>.
cp: cannot create /net/master.example.net/ojd/SOL_10_811_X86//Solaris_10/Tools/Boot/boot/solaris/bootenv.rc: No such file or directory
Failsafe configuration is complete.
Installing failsafe
Failsafe install is complete.

Update the BE with the latest patch bundle

Install the prerequisite patches for the upgrade on the live system:

cd /net/master.example.net/ojd/patch_cluster_recommended_x86 && ./installcluster --apply-prereq --s10patchset 

Install the patch bundle on the ABE:

cd /net/master.example.net/ojd/patch_cluster_recommended_x86 && ./installcluster -B s10x_u10wos_17b --s10patchset

Activate the BE

bash-3.00# luactivate -n s10x_u10wos_17b
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <s10x_u9wos_14a>
A Live Upgrade Sync operation will be performed on startup of boot environment <s10x_u10wos_17b>.
WARNING: <10> packages failed to install properly on boot environment <s10x_u10wos_17b>.
INFORMATION: </var/sadm/system/data/upgrade_failed_pkgadds> on boot
environment <s10x_u10wos_17b> contains a list of packages that failed to
upgrade or install properly. Review the file before you reboot the system
to determine if any additional system maintenance is required.

Setting failsafe console to <ttya>.
Generating boot-sign for ABE <s10x_u10wos_17b>
Generating partition and slice information for ABE <s10x_u10wos_17b>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import system
     zfs inherit -r mountpoint system/ROOT/s10x_u9wos_14a
     zfs set mountpoint=<mountpointName> system/ROOT/s10x_u9wos_14a
     zfs mount system/ROOT/s10x_u9wos_14a

3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <s10x_u10wos_17b> successful.

Check that the BE is now activated:

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10x_u9wos_14a             yes      yes    no        no     -
s10x_u10wos_17b            yes      no     yes       no     -

Add system parameters

TO BE CLARIFIED (HDS)

Mount the ABE and add the required system parameters to /etc/system.
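The exact parameters are still to be clarified (see above). As a sketch, the ABE can be mounted with lumount, /etc/system edited to add the required set entries, and then unmounted again:

lumount s10x_u10wos_17b /mnt
vi /mnt/etc/system
luumount s10x_u10wos_17b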

Restart the server

Do not use the reboot command; use init 6 instead.
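For example:

init 6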

Post upgrade

Fix the swap devices

If necessary: following an issue with swap, we ended up with two swap devices. Fix this by deleting all swap devices and creating a new one of the appropriate size:

swap -d  /dev/zvol/dsk/system/swap
zfs destroy system/swap
zfs create -V 32g  system/swap
swap -a /dev/zvol/dsk/system/swap

Recreate the symbolic link:

cd /
ln -s /opt/home

Delete the old BE

You need to remove all traces of the old system image to avoid errors while booting. You can keep it for a while for rollback, but do not forget to delete it eventually.

amussi@peseta $ sudo ludelete -n s10x_u6wos_07b
System has findroot enabled GRUB
Checking if last BE on any disk...
BE <s10x_u6wos_07b> is not the last BE on any disk.
Updating GRUB menu default setting
Changing GRUB menu default setting to <1>
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10x_u10wos_17b> as <mount-point>//boot/grub/menu.lst.prev.
File </etc/lu/GRUB_backup_menu> propagation successful
Successfully deleted entry from GRUB menu
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.

amussi@peseta $ sudo lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10x_u10wos_17b            yes      yes    yes       no     -

Upgrade the ZFS pool

You can now upgrade the system zpool:

amussi@peseta $ sudo zpool upgrade  system
This system is currently running ZFS pool version 29.

Successfully upgraded 'system' from version 10 to version 29

For backup nodes, you need to wait until all backup nodes have been upgraded before upgrading the pool; a pool upgraded to a newer version can no longer be used by nodes still running the older release.
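To check the current pool version, and whether a pool can still be upgraded, you can for example run the following (zpool upgrade with no arguments only lists pools that are not at the current on-disk version; it does not change anything):

zpool upgrade
zpool get version system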
