THE ANATOMY OF AN IT INFRASTRUCTURE DISCOVERY


“The greatest obstacle to discovery is not ignorance – it is the illusion of knowledge.” – Daniel J. Boorstin

Business process discovery and IT process discovery are far less famous than the great discoveries of mankind, but the last part of the quote above applies perfectly to both: the illusion of knowledge!

IT infrastructure discovery is one of several IT discovery processes, which cover IT processes, applications, data and infrastructure.

Paradoxically, the IT infrastructure is the Cinderella of the IT domain and at the same time its only “real”, tangible part, because regardless of where your infrastructure resides (in a data center or a cloud), ultimately it is made of physical servers, storage arrays, switches and routers.

Infrastructure discovery is usually done together with data and application discovery; sometimes it is a stand-alone process, for example when the infrastructure is obsolete and a refresh is needed.

Experience will tell you that a lot of IT departments do not have an up-to-date infrastructure inventory database, so the best practice is to take note of the customer’s asset database but run the discovery process from scratch.

Nowadays, there are many infrastructure asset audit applications, many of them with agentless deployment, scanning the infrastructure (servers, virtual machines, storage arrays, switches, routers, load balancers, etc.) via almost all the IT protocols available (network, storage, compute, etc.). One has to input the IP address ranges and the admin credentials for the scanned domains, and the audit application will collect all the available information for every asset. It is also recommended to run the software audit on the users’ subnets, capturing the audit data for PCs, laptops, printers, scanners and mobile devices as well.
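
To make the agentless scan idea concrete, here is a minimal Python sketch under stated assumptions: it only tests TCP reachability on a few common management ports across an address range, whereas a real audit tool would authenticate (SNMP, WMI, SSH, storage and network APIs) and pull full asset details. The port list and the example range are illustrative.

```python
# Minimal sketch of an agentless discovery sweep (illustrative only).
# A real audit tool would authenticate via SNMP/WMI/SSH and pull full
# hardware/software details; here we only test TCP reachability on a
# few well-known management ports for every address in a range.
import ipaddress
import socket

PROBE_PORTS = [22, 135, 443, 3389]  # SSH, RPC, HTTPS, RDP (assumed interesting)

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if the host answers on the given TCP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(cidr: str) -> list[dict]:
    """Walk an IP range and record which management ports respond."""
    inventory = []
    for ip in ipaddress.ip_network(cidr, strict=False).hosts():
        open_ports = [p for p in PROBE_PORTS if probe(str(ip), p)]
        if open_ports:
            inventory.append({"ip": str(ip), "open_ports": open_ports})
    return inventory

if __name__ == "__main__":
    for asset in sweep("192.168.1.0/28"):  # example range only
        print(asset)
```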

Again, experience will tell you that the software audit results will cover only 90-95% of the assets, so a physical audit is highly recommended. With the help of several SMEs, the physical audit will cover every asset in every cabinet, recording the information for each asset from the front and the back of the cabinet: asset name, serial number, network and fiber connections, etc.

The combined results of the software audit and the physical audit will be used to create an artefact called the discovery report, which will contain a detailed diagram of each cabinet, front and rear, and tables of assets: servers, storage arrays and network equipment. The tables will have details about the make, model, serial number, name, compute power and manufacture year of each asset. If a proper naming convention was used, the asset names will also reveal the operating system, the applications running on that server and whether the server belongs to the Prod, Dev, Test or DR environment.
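
As an illustration of how much a naming convention can give you for free, here is a small sketch that parses hostnames into site, environment, application and OS fields. The convention used (SITE-ENV-APP-OS-NN) is entirely hypothetical; adapt the pattern to whatever convention the customer actually uses.

```python
# Hypothetical naming convention: SITE-ENV-APP-OS-NN, e.g. "SYD-PRD-ORA-LX-01".
# This is an assumption for illustration; adapt the pattern to the customer's
# actual convention discovered during the audit.
import re

NAME_RE = re.compile(
    r"^(?P<site>[A-Z]{3})-(?P<env>PRD|DEV|TST|DR)-"
    r"(?P<app>[A-Z0-9]{2,4})-(?P<os>WIN|LX|AIX|SOL)-(?P<seq>\d{2})$"
)

ENV_LABELS = {"PRD": "Production", "DEV": "Development", "TST": "Test", "DR": "Disaster Recovery"}

def classify(hostname: str) -> dict | None:
    """Split a hostname into site / environment / application / OS fields."""
    m = NAME_RE.match(hostname.upper())
    if not m:
        return None  # name does not follow the (assumed) convention
    fields = m.groupdict()
    fields["environment"] = ENV_LABELS[fields.pop("env")]
    return fields

print(classify("syd-prd-ora-lx-01"))
# {'site': 'SYD', 'app': 'ORA', 'os': 'LX', 'seq': '01', 'environment': 'Production'}
```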

One will be surprised to learn that sometimes a Prod server is actually a desktop PC under somebody’s desk, built many years ago as a test machine and forgotten by everybody!

The discovery report will present a clear and accurate view of the present state of the infrastructure, without any recommendations regarding the future state. The infrastructure discovery report will play a very important role in the next processes, the data discovery and the applications discovery, which will be presented in a future posting.

Do virtualization and cloud mean the end of the infrastructure discovery process?

No, because ultimately the virtual machines will run on physical servers and physical storage arrays, and this is also true for virtual machines running in the cloud. Only if you are running your applications on a multi-tenant cloud does the physical audit not concern you per se; it becomes the cloud provider’s job, because there is nothing worse than providing a cloud solution that runs on old hardware!

For details about the infrastructure discovery process contact me:

Ilie.mihut@dashinternational.com.au

BACKUP IS DEAD, OR IS IT?

 


No, I will not pretend that backup is dead! It is just becoming something else.

The old days:

Traditionally, a backup solution involved some kind of media (tapes, disks) onto which you copied the important Prod data, and the media was then sent for safe-keeping to an off-site vault. The restore process, the reverse of the backup process, meant recalling a tape or a set of tapes and restoring the data to the original location or to an alternate restore point. Restoring large amounts of data was not a quick procedure: several hours if you were lucky. It is one of the reasons application owners and DBAs were keeping several sets of Prod data on the Prod storage, just in case!

Present days:

The amount of data has grown exponentially in the last five years and, thankfully, the technology has advanced in a similar fashion. The backup concept is gradually being replaced with the total data protection and management concept, meaning that the data is analyzed once to determine which data to back up, which to archive and how much indexing is needed for search purposes. The majority of storage arrays have snapshot capabilities, and with an intelligent backup solution integrating and controlling the snapshot process, backup and restore are much faster and require less space on the arrays. Synthetic backups and the “incremental forever” method, together with deduplication and global deduplication, are massively reducing the size of the backups as well.
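
For readers unfamiliar with deduplication, the toy sketch below shows the principle: split the data into blocks, hash each block, and store a block only the first time its hash is seen. Real backup products use variable-length chunking, compression and persistent global indexes; the block size and data structures here are purely illustrative.

```python
# Toy block-level deduplication: store each unique block once and keep
# a list of hashes ("recipe") describing how to rebuild the original stream.
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size

def dedup_store(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Split data into blocks, add unseen blocks to the store, return the recipe."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # only the first copy is kept
        recipe.append(digest)
    return recipe

def restore(recipe: list[str], store: dict[str, bytes]) -> bytes:
    """Rebuild the original stream from its recipe."""
    return b"".join(store[d] for d in recipe)

store: dict[str, bytes] = {}
backup1 = dedup_store(b"A" * 8192 + b"B" * 4096, store)
backup2 = dedup_store(b"A" * 8192 + b"C" * 4096, store)  # the 'A' blocks are not stored again
print(len(store))  # 3 unique blocks instead of 6
assert restore(backup1, store) == b"A" * 8192 + b"B" * 4096
```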

On top of this, backup to the cloud as a service is offered by all major cloud providers, although if your backup is a few hundred terabytes in size, the bandwidth between the Prod site and the cloud can be prohibitive.
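
A quick back-of-the-envelope calculation shows why the bandwidth can be prohibitive; the backup size, link speed and efficiency below are assumptions chosen only for illustration.

```python
# Rough seeding-time estimate for a cloud backup (illustrative numbers only).
data_tb = 300                      # assumed backup size in terabytes
link_gbps = 1                      # assumed dedicated link speed in Gbit/s
efficiency = 0.7                   # assumed usable fraction after protocol overhead

data_bits = data_tb * 8 * 10**12   # TB -> bits (decimal units)
seconds = data_bits / (link_gbps * 10**9 * efficiency)
print(f"~{seconds / 86400:.0f} days to seed the first full backup")
# ~40 days -- which is why initial seeding is often done by shipping disks instead.
```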

Based on the above, an enterprise backup solution (when properly implemented) is a robust and very dynamic data protection and management solution, with very fast backup and restore Service Level Agreements (SLAs), load-based variable backup windows, elimination of duplicated data at an enterprise/global level, and an archiving and indexing component that smartly handles the archiving of legal documents and messages for the required 7-10 year period, etc. Because remember, backup is for operational recovery, not long-term retention of data!


Future days:

The logical next step in data protection will be the Data Protection Architecture concept, allowing the data centers to deliver data protection as a service with the flexibility to plug different modules in at the appropriate time to address specific user or application requirements. With a data protection architecture there will be no lock-in to a single application or storage device, but a mixture of various data protection tools, from stand-alone software to storage systems and dedicated backup appliances, used as a service and managed as a single entity.

A Data Protection Architecture solution should be flexible, with data protection utilities able to write directly to the storage; it should keep the data in its native format and have a governing body to make sure all the required components work together, simplifying management. The goal will be to eliminate the stand-alone user interfaces of the data protection tools and applications and let them embed into the central data protection architecture framework.
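
One way to picture such a framework is a thin registry into which independent data protection modules plug through a single small interface. The sketch below is hypothetical and does not represent any vendor’s API; it only illustrates how modules could be managed as a single entity.

```python
# Hypothetical sketch of a pluggable data protection framework: modules
# (snapshot, archive, ...) implement one small interface and are driven
# by a central registry, so no single tool locks in the solution.
from abc import ABC, abstractmethod

class DataProtectionModule(ABC):
    name: str

    @abstractmethod
    def protect(self, dataset: str) -> str:
        """Protect the dataset and return a reference to the protected copy."""

class SnapshotModule(DataProtectionModule):
    name = "array-snapshot"
    def protect(self, dataset: str) -> str:
        return f"snap://{dataset}"          # placeholder for a storage-array call

class ArchiveModule(DataProtectionModule):
    name = "long-term-archive"
    def protect(self, dataset: str) -> str:
        return f"archive://{dataset}"       # placeholder for an archive-tier call

class ProtectionFramework:
    """Central registry that manages all modules as a single entity."""
    def __init__(self) -> None:
        self._modules: dict[str, DataProtectionModule] = {}

    def register(self, module: DataProtectionModule) -> None:
        self._modules[module.name] = module

    def run(self, dataset: str, policy: list[str]) -> list[str]:
        return [self._modules[m].protect(dataset) for m in policy]

framework = ProtectionFramework()
framework.register(SnapshotModule())
framework.register(ArchiveModule())
print(framework.run("finance-db", ["array-snapshot", "long-term-archive"]))
```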

As stated at the beginning of this article, backup is not dead and will not be for a long time; it is just that backup as we know it is evolving into a data protection architecture.

 

A HIGH LEVEL INTRODUCTION TO THE DARK SIDE…. OF THE INTERNET

 


As law-abiding Internet users, we generally stay on the “legal-official” side of the Internet (ClearNet). But I always believed that knowledge is power, so knowing a bit about the Dark Side of the Internet will benefit you: you will be aware of the traps and dangers and better prepared to avoid them. Ultimately, Darth Vader is just Yoda with a black helmet and a different funny voice!

So, we are aware of the Visible Net or the Surface Net or ClearNet and we are using it every day:

  • Search engines like Google, Bing, Yahoo, Baidu, Dogpile, HotBot, Metacrawler, etc.
  • Social media platforms like Facebook, Twitter, LinkedIn, Google+, YouTube, Blab, hi5, Friendster, Meerkat, MyLife, Periscope, Plaxo, Xing, Flickr, iTunes, MySpace, Vimeo, Instagram, Pinterest, Reddit, Scribd, SlideShare, Wikipedia, etc.
  • Email services like Gmail, Outlook, Yahoo Mail, AOL Mail, Zoho Mail, Mail.com, Yandex Mail, Inbox.com, etc.

It is time to introduce the Darknet, the Dark Web and the Deep web (definitions from Wikipedia):

  • Darknet: an overlay network, only accessible with specific authorization, configurations and software, generally using non-standard communication protocols and ports.
  • Dark Web: the content hosted on Darknets and on other overlay networks that use the public Internet but require specific authorization, configurations and software for access.
  • Deep Web: the part of the World Wide Web whose content is not indexed by search engines.


Figure 1 Darknet access and components

 

The above diagram is a high-level presentation of the Darknet components and some of the ways to access them.

There are several programs used to get to the Darknet; I will mention two of them: I2P and Tor.

  • I2P, or ‘The Invisible Internet Project’, is an anonymous peer-to-peer network. It allows users to send data between computers running I2P with end-to-end encryption, using unidirectional tunnels and layered encryption. Because of the limited number of outproxies to the Internet, I2P is best suited for peer-to-peer file sharing.
  • Tor, or ‘The Onion Router’, is an anonymous Internet proxy directing traffic through a worldwide volunteer network of thousands of relays. Tor wraps messages in encrypted layers and sends them through a bi-directional circuit of relays in the Tor network. Tor also provides a central directory to manage the view of the network. Because of the trust issues with exit nodes, Tor is best suited for anonymous outproxying to the Internet. (A toy sketch of the layered encryption idea follows after this list.)
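
To illustrate the layered (“onion”) encryption idea that both networks rely on, here is a toy sketch using the Python cryptography library’s Fernet primitive: the sender wraps the payload in one layer per relay, and each relay peels exactly one layer. Real Tor and I2P circuits negotiate keys per hop and add routing information; this only demonstrates the layering principle.

```python
# Toy illustration of onion-style layered encryption (NOT real Tor/I2P).
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# Each relay on the (assumed) three-hop circuit has its own key.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(payload: bytes, keys: list[bytes]) -> bytes:
    """Sender: encrypt for the exit relay first, then add layers outward."""
    for key in reversed(keys):
        payload = Fernet(key).encrypt(payload)
    return payload

def route(onion: bytes, keys: list[bytes]) -> bytes:
    """Each relay peels exactly one layer and forwards the rest."""
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

message = wrap(b"hello hidden service", relay_keys)
print(route(message, relay_keys))  # b'hello hidden service'
```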

The Darknet Market Places and the Darknet/Clearnet Market Places (several are mentioned in the diagram above) are web sites where illicit activities take place: trading, buying and selling any type of goods and digital items (drugs, guns, information, child pornography, assassins, malware, ransomware, DDoS, security and anti-security code, access to government sites, LinkedIn accounts and passwords, etc.), paid for with Bitcoin or other cryptocurrencies. The existence of many of these sites is ephemeral, not because the government agencies are taking them down, but because of competition and the fights between various groups of “dealers”.

It is common sense that accessing the Darknet Market Places is a dangerous thing to do, but one is free to choose this path at his/her own risk.

If one really wants to find these places, one can start with Reddit, DeepDotWeb, TheHiddenWiki.org or DNstats.net and look for lists of hidden services or .onion addresses.

There are several Darknet search engines. Two popular ones are Torch (http://xmh57jrzrnw6insl.onion/) and Grams (http://grams7enufi7jmdl.onion/), and they perform Google-like functions on the Darknet.

As mentioned at the beginning of this article, knowledge is power, it is good to know about the Darknet in order to be able to protect yourself, but please do not be seduced by the Dark Side!

P.S. Below are links to a fantastic blog presenting the Darknet (I used information from their blog in this article) and three Darknet-related articles for a more detailed view, if interested.

(https://blog.radware.com)


(http://fossbytes.com/welcome-to-the-darknet-the-underground-for-the-underground/)


(https://privacyliving.com/2016/02/12/darknet-dark-web-the-tor-browser/)


(http://www.golkondanews.com/the-evil-darknet/)

 

THE INFRASTRUCTURE ARCHITECTURE CHALLENGE


 

 

Very often, the real-life scenario for a business transformation starts with the infrastructure architecture, because the management (CEO, CIO, CFO) has already chosen the future business model and the Infrastructure Architect has to come up with the best fit to match the management “model”. But wait, it gets worse, because in real-life scenarios the CIO or the IT Director will already have an infrastructure architecture “solution”, and you, the Infrastructure Architect, have to make it happen!

It is a compromise: you have to come up with the best possible design, match the management requirements and stay on budget. The only possible deviation from this solution is when you find technical details that are show stoppers, such as hardware incompatibility or application incompatibility with specific platforms.

Now, let’s set aside the frustration created by the above-mentioned issues and concentrate on the Infrastructure Architecture. It is also known as the Technical Architecture; it basically deals with the structure and behavior of the technology infrastructure and covers the client and server nodes of the hardware configuration, the infrastructure applications that run on them, the infrastructure services they offer to applications, and the protocols and networks that connect applications and nodes.


Figure 1 Infrastructure Architecture Model

 

The above figure presents a possible Infrastructure Architecture Model, where the building blocks of the Infrastructure Architecture (Business Functions, Architecture, Technology, Organization, Financial and Training) are presented in three phases:

Current Environment (PMO) – where the blocks are randomly distributed.

Transition Environment (TMO) – where the blocks are starting to get in order.

Target Environment (FMO) – where the blocks are organized in the best possible working solution.

I have chosen the above model and the multi-colored blocks because they should be a constant reminder for all architects that the Target Environment you have reached today is the Current Environment of tomorrow, and the building blocks will gradually become randomly distributed again! As the quote below says: “The future is here, it’s just not widely distributed yet!”

 


 

Figure 2 Quote from John Sing, Director of Technology presentation

 

Finally, your project is finished and your new Infrastructure Architecture is done: a perfect cube of multi-colored little cubes, perfectly interlocked.

THE GERIATRIC IT PEOPLE

 


 

For the last 15 years we have been invited by our friends M and V to their birthday parties, to their children’s birthday parties, to Christmas lunches, to Australia Day lunches, etc. We met their parents, aunties and uncles, all of them in their late seventies or early eighties, all of them very active, eating, drinking and enjoying the parties. As a joke, I used to call these events “The Geriatric Parties”, although I very much admire their vitality, intelligence, wisdom and, mainly, their “joie de vivre”!

I am going to refer to the IT people in their late fifties and early sixties, because I belong to both categories: IT and early sixties. Now, the majority of us are employed full time or contracting as project managers, database administrators, solution architects, infrastructure specialists, application owners, etc., and it seems perfectly normal to be part of the IT workforce. In my opinion there are only two issues with this picture:

  • If you are made redundant or your contract comes to an end, it is very hard to find a new job/contract. Normally you would believe that, with your experience in IT and the knowledge accumulated over 35-40 years of work, it would be pretty easy to get the next job, but it is not. You will get replies from the employment agencies or directly from the employing companies stating that you are overqualified for the position or that they are looking for a specific skill you do not have, but the truth is very simple: you do not get the job because you are in the late-fifties/early-sixties bracket and you are classified as a geriatric!
  • The market for very short contracts/consulting jobs is almost non-existent, and I strongly believe that many companies would greatly benefit from such jobs. Let me explain: a lot of IT specialists in my age bracket would be very interested in short-term consulting jobs, where they can share their work experience and can be very valuable assets, while at the same time enjoying one- or two-month breaks between contracts.

There are many solutions for the issues mentioned above; I just want to highlight several:

  • Educating the business owners, CEOs and stakeholders about the existence of this important senior workforce, via conferences, emails, articles and social media
  • Contacting your parliament and senate members and requesting a national, government-sponsored program for senior IT personnel
  • Finding investors brave enough to finance a “senior”-only IT company!

And if you do not believe me, just watch a fantastic movie starring Robert De Niro called “The Intern”!

I was called “an aging rock star” or, as my T-shirt says, “a geriatric hippie”, and I am happy to be both, but I know that I can add value and experience to any IT company interested in having an enterprise architect as a consultant, knowing that I am also an infrastructure architect, a data and applications architect, a backup architect and a decent UNIX/Linux SME.

I can be contacted at: ilie.mihut@dashinternational.com.au

THE ENTERPRISE ARCHITECT DILEMMA

 


 

 

I chose the above picture from the Matrix Reloaded movie and yes, he is The Architect. The reason for choosing him is that there is a serious resemblance between The Architect and an Enterprise Architect (EA). The Architect was running the sixth iteration of The Matrix in order to get the perfect one, and in a similar fashion, the Enterprise Architect runs cycles of a business solution, starting a new cycle every time the business has an issue or deviates from the initial parameters.

Unfortunately, in Australia the perception of the role and attributes of an Enterprise Architect is not the right one, and here is my dilemma: should I accept an Enterprise Architect role answering to the IT Director? After careful consideration, my answer will be no, because a yes answer would negate the very definition and role of an Enterprise Architect. The EA and his team should answer to the CEO or the Board of Directors, not to the IT Director (or CIO).

What does an Enterprise Architect do? Amongst other things, he/she will look at and analyze your business as a whole, assess the issues, determine the business needs and come up with a business solution for the future. To implement the business solution and reach the future state, IT has to be aligned to the business needs and goals, and this is where the EA Team (Data Architect, Application Architect, Infrastructure Architect, Network and Network Security Architect, Mobility Architect) will design specific solutions for each mentioned domain in order to achieve that goal.

When the whole solution is designed, reviewed and approved, the IT components (data, applications, infrastructure, network, security, etc.) can be implemented by the IT Department or by a third-party entity, based on the needed skills and qualifications.

I strongly believe that the CEOs and Boards of Directors of Australian businesses need to be informed and educated regarding the concept of Enterprise Architecture and the role of an EA, via seminars, articles and papers published in CEO magazines, etc. Only when they know what an EA is and how the Enterprise Architecture framework can help a business will the full benefits of such a methodology materialize: rapid business growth, flexibility in a challenging market and the backing of a modern, very adaptable IT.

I will close with the memorable words of The Architect: “Hope, it is the quintessential human delusion, simultaneously the source of your greatest strength, and your greatest weakness!”

P.S. For more details about the subject, or to have me as a consultant: ilie.mihut@dashinternational.com.au

THE ANATOMY OF A DATA CENTRE TRANSFORMATION PROGRAMME

 


How do you end up starting a DCT programme?

Here are some possible scenarios:

  • The business goals and requirements are no longer properly supported by the existing IT infrastructure.
  • Your company has acquired one or several companies with their own data centres and IT infrastructure.
  • Dramatic innovations in your business domain require rapid adaptation to new technologies and implicitly new IT technologies, super-fast networks, mobility solutions, social platforms based marketing and advertising options, etc.

As the CEO of a company in one of the above scenarios, you will get messages and hints from the business, the customers, the stakeholders, the employees and the IT people, complaining about various issues like productivity, sales, advertising, connectivity, etc. You can go for a quick fix (it happens a lot!) and ask the IT director to purchase new servers, storage and network appliances, which will show some improvement in the short term, but you would be dead wrong.

The CEO should initiate the DCT Programme and become its major sponsor. In order to assess the magnitude of the business and DC transformation, you need an Enterprise Architect (EA) and a team of consultants: Business Architect, Applications Architect, Data Architect, Infrastructure Architect, Network Security Architect, several Business Analysts and several SMEs in the software, operating systems, servers, storage and network areas. Very important: the Enterprise Architect should report to the Board of Directors and not to the IT Director, because you need an independent, professional transformation design, unbiased by IT.

Here is a possible sequence of the actions:

  • The business assessment process will involve, in the first stage, data/information gathering about the business by means of interviews with the stakeholders, business owners, managers and employees, plus a set of web-based business questionnaires which should be mandatory for everybody in the company. The answers to the interviews and questionnaires will be analysed by the EA team; the results will give a good overall picture of the present status of the business and will also allow the EA team to perform a gap analysis between the present and future states of the business and develop a very high-level design (vision) of the future status of the business. Once the business vision is presented to the stakeholders and approved, it becomes the goal of the DCT Programme: transforming the IT (data, applications, infrastructure) in order to align it to the business needs and achieve the business goal.
  • The Discovery process will allow a precise assessment of the Present Mode of Operation (PMO) of the applications, data and infrastructure, employing a couple of procedures: interviews with the application owners, support teams and end users, plus technical questionnaires; and data, application and infrastructure audits via physical inspection and application- and infrastructure-specific audit software tools. The combination of top-down (from the applications to the hardware) and bottom-up (from the hardware to the applications) audits, together with the production of an application interdependency matrix, will fine-tune the PMO status (a toy sketch of such a matrix follows after this list).
  • Based on the business vision of the future state, the EA team will produce a Future Mode of Operation (FMO) status for the applications, data and infrastructure. Another gap analysis will be performed between the PMO and FMO of the above-mentioned components, and the results will constitute the foundation of the High Level Design (HLD) document for the Transformation. The HLD document will present the future state of the applications, access and application mobility; the future state of the data, data storage and data protection; and the future state of the infrastructure, whether it is a hardware refresh, virtualization or a migration to the cloud. The HLD will also cover data replication, backup and archiving, network security and mobility solutions. It is possible to add some high-level financials at this stage. The HLD will be peer-reviewed amongst the Solution Architects and presented for review and approval to the stakeholders, and it will go through several iterations and alterations until everybody approves it. Beware of the legacy application owners: they are very protective of their applications and do not like any kind of change. It is quite possible that the approved HLD will have a few legacy applications “transformed” via the good old “lift and shift” of their servers!
  • A new artefact, the Detailed Design (DD) solution document, will be created based on the approved HLD. As the name suggests, the DD will have all the technical details for the Transformation: the locations of all new hardware racks, the IP addresses of physical and virtual servers, storage and network appliances, application connectivity ports, data replication details, backup and archiving details, DR fail-over details, inter-DC connectivity details, redundant multi-Telco Internet connectivity details and many, many more. It will also contain detailed financials. The DD document will likewise be peer-reviewed by the Solution Architects and reviewed/approved by the stakeholders. After approval, the DD becomes the official document for implementing the transformation.
  • The Implementation phase is what we have been waiting for. Paradoxically, the input of the EA team at this stage is minimal. Did I mention that from the beginning of the Programme there was a Program Manager and a team of Project Managers (PMs) assigned to the Transformation? Well, yes, they were working hard behind the scenes: getting resources for the whole Programme, getting approvals for the SMEs to perform the physical audits in the DCs, submitting and following up the purchase orders for all the new equipment, and making sure that the designated area in the DC has the proper racks, the requested power, connectivity, Internet access, etc. The Solution Architects will assist the PMs with the conversion of the DD document into a detailed implementation plan, taking into account the business requirement for zero or Minimal Down Time (MDT) for the Production environment, not to mention the approved change window for the Prod environment, which is usually Saturday 11:00 pm to Sunday 5:00 am.
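
The application interdependency matrix mentioned in the Discovery step can be represented very simply; the sketch below assumes the dependencies were harvested from interviews and connection logs, and the application names are made up.

```python
# Toy application interdependency matrix built from discovered
# "A depends on B" pairs (application names are made up).
# Such a matrix drives migration grouping: applications that depend on
# each other usually have to move in the same change window.
from collections import defaultdict

discovered_dependencies = [
    ("web-portal", "crm"),
    ("crm", "oracle-finance"),
    ("reporting", "oracle-finance"),
    ("web-portal", "reporting"),
]

def build_matrix(pairs):
    apps = sorted({a for pair in pairs for a in pair})
    deps = defaultdict(set)
    for src, dst in pairs:
        deps[src].add(dst)
    header = " " * 16 + "".join(f"{a:>16}" for a in apps)
    rows = [header]
    for src in apps:
        cells = "".join(f"{'X' if dst in deps[src] else '.':>16}" for dst in apps)
        rows.append(f"{src:<16}{cells}")
    return "\n".join(rows)

print(build_matrix(discovered_dependencies))
```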

And you know the two words you do not want to hear on a Sunday at 1:00 am? Roll back!

P.S. The above is a very high-level presentation of a transformation scenario. If you want more details or need help with a similar project, contact me: ilie.mihut@dashinternational.com.au

 

THE DATA CENTRE TRANSFORMATION (DCT) STORY


If you follow the wise words of Master Yoda regarding the Sith Lord: “Always two there are, no more, no less. A master and an apprentice.” you will be all right with the number of data centres required for a proper Data Centre Transformation project.

The analogy is correct for “old style” data centres, with a Master (the Production data centre) and an Apprentice (the Disaster Recovery, or DR, data centre).

Nowadays we find more and more the Master-Master (Active-Active) scenario, where the Production data centre fails over to the DR data centre with little or no human interaction.

The future (and some companies are already there!) will present to the business a 100% business continuity solution (0% downtime), where there is no difference between the two data centres. With ultra-fast fibre optic connectivity between the data centres, with massive virtualization and cloud integration, and with multiple feeds from multiple power grids, all the applications will always be available, running on a “Prod” server regardless of whether that server is up in data centre one or data centre two.

A DCT project is a never-ending one, and not because of the new technologies emerging every day, but because the businesses are continuously evolving and changing, and so IT should transform and adapt accordingly.

The key principle for IT is to listen to the business and to align IT to the business needs, not the other way around. If, as the IT entity, you do so, every change, every struggle the business has should be converted into a transformation project: from replacing a few servers, to virtualization, to cloud, to a total transformation of the existing data centres or maybe a migration to new data centres altogether.

 


Contrary to popular belief, virtualization is not as widely implemented as it should be. Many large or enterprise-size companies have only between 20% and 40% of their servers virtualized. The reason behind this is the presence of legacy applications and the complex challenge of virtualizing those applications.

Even worse is the percentage of cloud-based infrastructure and applications. The main concerns about moving to a cloud-based IT solution are data security, lack of control (for shared public clouds) and cost. Evidently, a cloud-based solution has a different model for each of the above-mentioned concerns. What the CEO and main stockholders of any company should know, besides the concerns, is that a cloud-based solution presents the company with huge business opportunities, makes the business more agile and accelerates any change exponentially.

So, virtualization and cloud are the two main components of DCT for the near future.

A major role in understanding and accepting modern DCT projects is played by the transformation of the organizational culture. As part of it, the technical IT people, who are very technology-focused, need to be educated to think not about infrastructure, storage, servers and networks, but about the services they provide to their customers (which is ultimately the business). Stakeholder engagement and change management are also part of this cultural change. Instead of implementing a DCT project and telling the business to use it, it is much better to have a consultative approach: listen to the business needs and align the technology and the DCT to the business needs and challenges.

I leave you with some words of wisdom from Keith Duncan, head of data centre design and delivery at Telefonica O2, regarding DCT:

Be bold and confident. “We all live in a zero tolerance environment so pay close attention and be realistic in execution.”

Align data centre transformation to the business programme. “This is a big one to take away. Rather than taking the technological approach, tack more to the business programme and projects and dovetail with them.”

“Take the business with you, get stakeholder investment and sell it to the business.”

Cultural change is very important. “It is not just about technology. Transformation of culture of the organization is just as important as the introduction of new technology.”