
SL7 Project Plan

Previously, when porting to new platforms, we have incrementally ported components on top of machines built with the upstream's native installer. Once the core components needed to install and keep a machine LCFG-managed have been ported, we work on the LCFG install system and attempt a full LCFG install of the new platform.

  1. Create a project blog
  2. Create a project topic on LCFG wiki
  3. Install SL7 (1 day)
    • Standard SL7 desktop machine
    • Get onto the Informatics network.
    • Authentication with Kerberos.
    • Directory services from LDAP.
    • AFS filesystem access.
    • Set up Mock for automated builds.
    • Configure PkgForge (buildhost on SL6 64bit building SL7 packages)
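Setting up Mock amounts to dropping a chroot definition into /etc/mock on the build host. A minimal sketch, assuming an sl7-x86_64 config name and a placeholder mirror URL (the exact chroot_setup_cmd group for EL7 may differ):

```
# /etc/mock/sl7-x86_64.cfg (sketch; URLs are placeholders)
config_opts['root'] = 'sl7-x86_64'
config_opts['target_arch'] = 'x86_64'
config_opts['legal_host_arches'] = ('x86_64',)
config_opts['dist'] = 'el7'
config_opts['chroot_setup_cmd'] = 'install @buildsys-build'
config_opts['yum.conf'] = """
[main]
cachedir=/var/cache/yum
debuglevel=1

[base]
name=SL7 base
baseurl=http://example.org/mirror/sl7/x86_64/base/
"""
```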
  4. Package infrastructure (0.5 day)
    • Set up a site mirror of SL7 (from which buckets will be populated and the package lists generated)
    • Create repository directory structure
    • Populate base, updates (if necessary)
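The directory structure can be created up front. A sketch in shell; the bucket names and the /tmp path are illustrative assumptions modelled on the existing SL6 layout:

```shell
# Illustrative layout only: real repositories live elsewhere, and the
# bucket names should follow the SL6 conventions.
root=/tmp/sl7repo/7/x86_64
for bucket in base updates lcfg world; do
    mkdir -p "$root/$bucket"
done
ls "$root"
```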
  5. Package lists (5 days)
    • Create lists for SL7 base, updates, kernel, desktop (attempt to base on kickstart groups)
    • Create lcfg_sl7_lcfg package list and update to reflect SL7 packages
    • Create installbase package list
    • Modify pkglist tools to support SL7
  6. Essential headers (0.5 day)
    • Create any essential headers for the platform
    • Add basics to lcfg/defaults/profile.h and lcfg/defaults/updaterpms.h
    • Prepare inf layer headers
  7. Essential profiles (0.5 day)
    • installbase-sl7 profile
  8. Auto-build and run tests for all LCFG components (2 days)
  9. Create basic development platform (once Mock configured) (3 days)
    • Develop Inf level to create a basic profile
    • lcfg-utils and lcfg-utils-devel
    • lcfg-pkgtools & lcfg-pkgtools-devel
    • perl-LCFG-PkgTools
    • perl-LCFG-PkgUtils
    • perl-LCFG-Utils
    • lcfg-ngeneric
    • lcfg-client
    • lcfg-file
    • lcfg-sysinfo
    • lcfg-logserver
    • lcfg-authorize
    • lcfg-om
    • updaterpms
    • lcfg-updaterpms
    • pkgsubmit
  10. Add PXE configuration for SL7 based on SL6 (0.5 days)
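The PXE entry can largely be copied from the SL6 one. A sketch of a pxelinux.cfg stanza, with hypothetical kernel and initrd paths:

```
# pxelinux.cfg entry (sketch; paths are assumptions)
LABEL sl7
  MENU LABEL Install SL7
  KERNEL sl7/vmlinuz
  APPEND initrd=sl7/initrd.img
```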
  11. Produce any SL7-dependent RPMs (2 days)
    • lcfg-defetc-sl7 (needed by lcfg-auth)
    • lcfg-release-sl7
    • Add SL7 template to lcfg-etcservices
    • Configure LCFG component ordering appropriately (via systemd dependencies)
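Ordering previously expressed through SysV start priorities maps onto systemd After=/Requires= dependencies. A hypothetical unit for the client component, as a sketch of the idea only (unit names and the om invocation are assumptions, not the final design):

```
# /etc/systemd/system/lcfg-client.service (hypothetical sketch)
[Unit]
Description=LCFG client component
# Ordering is expressed declaratively instead of via start priorities
After=network.target
Requires=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/om client start
ExecStop=/usr/bin/om client stop

[Install]
WantedBy=multi-user.target
```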
  12. LCFG management of natively installed machine (? days)
    • systemd config via lcfg-systemd
    • LCFG components (from above section) managed by lcfg-systemd (and modified as appropriate)
      • lcfg-client
      • lcfg-updaterpms
      • lcfg-cron
      • lcfg-lcfginit
      • See EL7Components
  13. Installation systems (4 days)
    • Attempt install of base inf machine using SL6 installer
    • Create installroot package list
    • Build, install and test lcfg-buildinstallroot
    • Produce CD iso installer
    • Deploy PXE installer
  14. Port MPU-managed resources to the DICE level (3 days)
    • dice theme for display manager (cc)
    • Provide live emergency headers and package lists
  15. Document new platforms (2 days)
    • Developer FAQ
    • Known issues
    • Hardware support
    • (systemd documentation)
    • Documentation for users - changes
  16. LCFG devel tools
    • Package LCFG devel tools for SL7 (may be earlier if required)
    • Add pkgforge client (1 day)
  17. Check for issues on supported hardware types (can be CSO) and record in the LCFG wiki (5 days)
  18. Reconsider systemd/LCFG component interaction
  19. DICE level packages for which MPU is responsible
    • dice-authorise
    • sleep
    • autoreboot
  20. Re-base (e.g. package lists) to SL7/CL7
    • Package lists
    • Update tags in packages to record EL7 support
  21. Remove inf layer dependence on devel buckets and test (1 day)
    • MPU components
    • installroot
    • Other units' components
  22. Add SL7 to the list of supported platforms on the LCFG website.
  23. DIYDICE (ascobie)
  24. Virtual DICE
  25. Server support
    • fibre?
    • bonding
    • VLANs
    • multipath?
    • apacheconf
    • nfs
    • amd (or autofs?)
    • remctl

PLUS the following

  • Every unit is configuring nagios (and sysinfo resources) for its servers in a different way. We should agree on a standard approach and then create headers to make it all work sensibly. (Strictly speaking a DICE-only issue.)
  • Consider whether we still want to use prelink - there's a view that this no longer improves performance as it once did and it gets in the way when investigating suspected compromised machines.
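If we do drop prelink, turning it off on EL-family systems is a small change, and already-prelinked binaries can be reverted with `prelink -ua`. Sketch:

```
# /etc/sysconfig/prelink
PRELINKING=no
```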

-- Main.ascobie - 2013-11-27

Topic revision: r8 - 2015-03-20 - cc