Wednesday, 28 September 2011

System Center Configuration Manager - Discovery

Now that SCCM 2012 is installed, it needs to be configured to work with the network and discover the clients and networks so that it can start to be useful. So the first port of call is the new Discovery Methods section.

The first one I noticed was Active Directory Forest Discovery.



This was run, and the status messages were checked.


It then appeared in the Active Directory Forests section of the Hierarchy.


The older AD Security Group, System, System Group, and User Discovery methods were configured for the Proof of Concept domain, and within a couple of minutes there was plenty of information in the Asset lists.


Finally, a Boundary Group was created from the IP and site boundaries discovered by the AD Forest discovery, to scope the SCCM server.



As this is a test lab, initially only client push will be used for new agents.
So the first step was to install the agent on the site server.


Tuesday, 27 September 2011

System Center Configuration Manager

As this blog really does not have a purpose beyond being somewhere to hold ramblings, I thought I would write a quick series on SCCM 2012 and Windows 8, as these will start to appear in Windows-based networks over the next couple of years, and it will become my holding place for my own experiences with these technologies.

Rather strapped for modern hardware and not wanting to virtualise the SCCM product, I was able to get an HP DL360 G5 as a test server (8Gb RAM, a couple of processors and a reasonable set of disks, configured as a 136Gb system drive (C:) and a 410Gb data drive (D:)). Windows 2008 R2 was installed and the network configured (4 NICs teamed to a single virtual NIC with a static IP). The WDS and WSUS roles were installed (along with the required features) but NOT configured, and the .NET 3.5.1 feature was also installed.

SCCM 2012 Beta 2 was downloaded, and I was informed that .NET 4.0 was also required, so this was downloaded from Microsoft along with the latest security updates, and then the SCCM installation was attempted again.

As this was a test system, I went with a Primary Site install, but did not tick the Typical check box, as I have always been advised against this on previous versions of SMS/SCCM. At the download screen I originally went with a path that contained a space, but replacing that with a hyphen allowed me to progress to seeing the extra updates being downloaded.

I then hit the first gotcha: you must have SQL Server already installed. So the installation was cancelled and a SQL install started. SQL must be 2008 SP1 with CU10 (it does not support 2008 SP2 or R2). The SQL 2008 installation does not appear to work over Remote Desktop, so a switch over to the KVM and another attempt.

The following SQL 2008 features were installed:
  • Database Engine Services
    • SQL Server Replication
  • Reporting Services
  • Client Tools Connectivity
  • Management Tools - Basic
    • Management Tools - Complete
Default instance, with the root moved to the larger D drive. Services were set to run with local system accounts; the rest of the configuration used the defaults. Due to the compatibility issues between Windows 2008 R2 and SQL 2008, Service Pack 1 was installed immediately after the SQL install had completed. SP1 demands a reboot, so the two ports for SQL were opened in the firewall (1433 for database access and 80 for Reporting Services) and Named Pipes and TCP/IP were confirmed as enabled for SQL, then the reboot, then the installation of the CU10 updates.
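For reference, opening those two ports on Windows 2008 R2 can be done from an elevated command prompt with netsh; the rule names below are just my own labels:

```shell
rem Inbound rule for the SQL Server database engine (default instance port)
netsh advfirewall firewall add rule name="SQL Server 1433" dir=in action=allow protocol=TCP localport=1433
rem Inbound rule for SQL Reporting Services over HTTP
netsh advfirewall firewall add rule name="SQL Reporting Services 80" dir=in action=allow protocol=TCP localport=80
```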

The AD schema had already been extended for SCCM, as SCCM 2007 had previously been installed.

Then the required certificates were created and installed on the CA as per the guidance on TechNet, but that hit a problem: the CA was running Windows 2003 while the client was 2008, so web enrollment would not work and certreq commands had to be used instead. NOTE that at this point you need to be logged into the SCCM server with a domain account to ensure that you can access an AD CA.
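The certreq sequence looks roughly like the following; the file names and the CA config string below are placeholders for your own request INF and issuing CA:

```shell
rem Build a request from an INF file describing the certificate template (placeholder file names)
certreq -new websrv.inf websrv.req
rem Submit the request to the issuing CA (replace with your own CAServer\CA-Name)
certreq -submit -config "CASERVER\My-Issuing-CA" websrv.req websrv.cer
rem Install the issued certificate into the local machine store
certreq -accept websrv.cer
```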

On trying to enable HTTPS access via the IIS console, I found IIS was not installed, so the following role services were added with the IIS role:
  • Common HTTP Features
    • Directory Browsing
    • HTTP Errors
    • HTTP Redirection
  • Health and Diagnostics
    • HTTP Logging
    • Logging Tools
    • Request Monitor
    • Tracing
  • Performance
    • Static Content Compression
  • Management Tools
    • IIS Management Console
    • IIS 6 Management Compatibility
      • IIS 6 Metabase Compatibility
      • IIS 6 WMI Compatibility
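If you prefer the command line, the same role services can be added in one go with DISM on 2008 R2 (the Web Server role is pulled in as a dependency). The feature names below are from memory, so double-check them with "dism /online /get-features" first:

```shell
rem Enable the IIS role services listed above in a single pass (elevated prompt)
dism /online /enable-feature /featurename:IIS-DirectoryBrowsing /featurename:IIS-HttpErrors ^
 /featurename:IIS-HttpRedirect /featurename:IIS-HttpLogging /featurename:IIS-LoggingLibraries ^
 /featurename:IIS-RequestMonitor /featurename:IIS-HttpTracing /featurename:IIS-HttpCompressionStatic ^
 /featurename:IIS-ManagementConsole /featurename:IIS-Metabase /featurename:IIS-WMICompatibility
```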
Restarted the SCCM installer, only to be informed that BITS and RDC needed to be installed; these were added from Features and another attempt made. It stopped very quickly at the start of the installation, where I noticed that the SQL Server service had stopped. New certificates were added to SQL Server, the services confirmed as running, and after another reboot I tried again. Then SQL was in a real muddle - see this blog (nickstips.wordpress.com/2010/09/08/sql-ssl-and-sql-server-2008-service-doesnt-start-error-code-2146885628/).

And then it stopped again: the installer moved its own self-signed certificate in and stopped the SQL Server, so this new certificate was fixed as per the blog above, SQL started, and the SCCM install began to move on.

And then you wait. A good while later I had a working admin console. Next I will start to look at the guided test cases, and especially the Operating System deployments.





Monday, 6 June 2011

Beware of colleagues bearing gifts

Sometimes an Architect has to get out of their lofty white tower and get their hands dirty with some real work, and that has been my lot over the last few weeks.

A colleague, being helpful and resourceful, managed to obtain a couple of SAN switches and some disk storage that we could use in the test lab, but by the time I got involved, he couldn't remember anything about where they came from or what any of the passwords were.

So, time to investigate. The switches were labelled HP StorageWorks SAN Switch 2/16V, which in reality is just a rebadged Brocade 3850. After a couple of attempts at finding a reset button (no such luck) I connected to the serial port and pulled the power to force a reboot. On booting you are given an option to drop into a PROM state, where I found that the recovery password had not been set (it has now) and a few Linux-type commands were available:
  printenv
This produced the PROM environment settings, confirming the in-memory OS loader location.
  boot MEM()0xF00000 -s
This manually started the OS from the memory location shown above.
  mount -o remount,rw,noatime /
  mount /dev/hda2 /mnt
  /sbin/passwddefault
This reset all the built-in accounts to their factory defaults.
  reboot -f
This rebooted the switch, after which we were able to get into Fabric OS using the default root account and password (fibranne - if you have the same switches as me; otherwise Google is your friend).

So finally we have two working switches, albeit with old versions of the firmware, so more hands-on work for me as we go through the upgrade process. I hope the lab is worth it when it is done.

Monday, 28 March 2011

Internet Protocol version 6

Over the past few weeks I have been starting to look at IPv6 and what we need to do to support it in the large organisation that I work for. We have strong limits on Internet access and a healthy bank of routable IPv4 addresses, so the first thought was that we didn't need to use it.

But this seemed a little short-sighted to me, so I have been continuing to review our options. We are probably at a stage where we should be planning an IP addressing scheme and then implementing it only as and when other projects are available to hide the costs, as there is almost no direct business benefit that could cover the costs involved in any IT change.

Now, for those that don't know (or as a reminder to those that do), IPv6 has been designed to be allocated hierarchically, with IANA at the centre giving out blocks of addresses to the regional registries (such as RIPE NCC for us here in Europe), which in turn allocate them to local registries and Internet service providers. So this is where my thoughts started: where should we get our allocation from? I work for a public sector organisation (in fact a UK police force) and we are being told to share more with our colleagues, which will require firewall and router rules to support it. Then it hit me: what we needed was for all the other forces to have a shared set of addresses, so that one rule would allow me to say that this traffic is from someone in a police force somewhere in the country, without our having to care which.

Then I took this a stage further: why should the UK government not have a single prefix for all its agencies and departments? With a single line in a firewall rule set we could make a set of systems available to any government agency. They would still need Role Based Access Control (the right for a specific user to access the system), but we could trust that the packet has come from a UK government site and let it past the firewall.
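To illustrate on a Linux box with ip6tables (using the IPv6 documentation prefix 2001:db8::/32 as a stand-in for a hypothetical shared government allocation), that single line could look like:

```shell
# Accept HTTPS traffic whose source falls anywhere inside the (hypothetical) shared prefix
ip6tables -A INPUT -s 2001:db8::/32 -p tcp --dport 443 -j ACCEPT
```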

But the idea is the easy bit. UK government IT is a maze of small, medium and enterprise IT departments with little collaboration or communication between them, so getting someone to oversee the management of IPv6 addresses will be a challenge. But here we go: it would make IP easier for the whole public sector and should reduce some potential costs, which in these times of cuts should be appreciated.