Feb 21
Department of Homeland Security Iranian Advanced Persistent Threat Spreadsheet

iranian_apt_attack_matrix.zip

Attached is a spreadsheet developed by the Department of Homeland Security summarizing Iranian Advanced Persistent Threat (APT) activity.

Here's a link to our YouTube channel, which walks through the details of this spreadsheet: https://youtu.be/Zms4fh4Zxc4

Dec 31
Ransomware Infects Synoptek’s Clients

Synoptek-gets-hit-with-ransomware.jpg


I received a call early on Christmas Eve (12/24/19) from a company that was hit by Ransomware.  It appears that the attack happened around 3:00 a.m. on 12/24/19.  This company was a client of the Managed Service Provider (MSP) Synoptek.  As you may have heard, hackers compromised Synoptek and used Synoptek's Remote Monitoring and Management (RMM) tool to deliver Sodinokibi Ransomware to a subset of Synoptek's clients.  The Ransomware attack infected the company's servers and workstations and took their network down.  At the time, Synoptek wasn't giving timely updates about how they were going to address the "Security Incident."  The company reached out to us for suggestions on how to recover from the attack. 

We took a look at their Veeam backup infrastructure, and their backup files appeared to be OK.  I suggested copying the backup files off to another storage device to see if they could restore from them.  They were running ESXi Servers for their virtualization platform.  If you need an inexpensive ($1300) ESXi host for testing, check out https://www.adscon.com/sites/blog/Lists/Posts/Post.aspx?ID=58.  The news of the attack first hit Reddit https://www.reddit.com/r/sysadmin/comments/ef2egh/synoptek_issues/ on 12/24/19.  On this thread, someone mentioned that Synoptek paid the ransom for an undisclosed amount.  When we heard of the ransom payment, we suggested that the company wait until they received the decryption keys before attempting any restore.  The decryption keys worked, and the company was back up and running around 12/27/19.

Fortunately, the company was able to recover relatively gracefully from the attack.  In the future, here are some steps to protect against this type of attack.

  1. Two-Factor Authentication.  We now recommend two-factor authentication for all clients.  Make sure both your MSP and your company use two-factor authentication for all accounts.
  2. Firewall with Malware Scanning.  Use a firewall that includes malware/anti-virus/intrusion scanning.  The company that was infected did not have a firewall with this additional protection.
  3. Follow the 3-2-1 Backup Rule.  3 copies of your data, on 2 different media, with at least 1 copy off-site and offline.  Companies may perform backups, but a lot of companies DO NOT have offline backups!  If you get infected with Ransomware and your only backups are online, they can be encrypted too, and you will not be able to restore from those backup files.
  4. Create a Protected Management Network with Restricted Access.  All management of your ESXi/Hyper-V Servers should be performed from a separate, protected Management Network.  Your backups should run exclusively on this management network.  Use a separate VLAN and firewall rules that restrict access into this network.  Use a separate Active Directory Domain with different credentials and two-factor authentication to access the Management Network. 
  5. Hacking Recovery Plan.  Have a recovery plan in place before you are hit with an attack.  We feel that this plan should be part of your Disaster Recovery Plan.  You don't want to figure out how to recover during an actual attack. 

If you want help implementing any of these suggestions, want to protect yourself against the next Ransomware attack, or want to improve your company's Cybersecurity, please send an email to info@adscon.com or call us at (310)541-8584 x 100.  


Nov 13
The 3 Big Benefits of HCI Everyone is Talking About

3-hci-benefits.jpg

Hyper-converged infrastructure (HCI) is a software-centered architecture that combines compute, networking, storage, backup, and virtualization into a single resource pool that typically runs on x86 server hardware. It was introduced in 2014 to automate repetitive tasks, minimize the risk of human error, and increase performance at lower cost. Every HCI node in a cluster runs a hypervisor such as Microsoft Hyper-V, VMware ESXi, or Nutanix AHV, and the HCI control plane runs as a virtual machine on each node, creating a fully distributed fabric that can be scaled out by adding nodes.



Numerous IT Functions Consolidated 

The centralized management solution offers advantages that traditional data centers do not: high performance, efficient architecture, straightforward deployment, and secure management. For fault tolerance, at least two copies of hyper-converged storage are placed on distinct nodes. Direct attached storage (DAS) on every node uses data virtualization to create a virtual SAN. According to an Enterprise Strategy Group survey, HCI's ability to consolidate numerous IT functions like deduplication, backup, migration of VMs between different appliances or even data centers, and WAN optimization into a single platform is the most appealing reason to use hyper-converged infrastructure.



The technology lets organizations realize both CapEx and OpEx benefits by streamlining the deployment, management, and scaling of IT resources. HCI evolved out of the need to expedite scaling, remove infrastructure complexity, and boost overall performance. Integrating servers and storage, managed from the hypervisor, has made data centers far more practical to run. Recently a new class of hyper-convergence has appeared, called Tier 1 hyper-convergence or disaggregated HCI, where storage and servers are again physically separate entities and are fused only at a software layer that makes them appear as one.

 

Top HCI vendors include HPE SimpliVity, which pairs HCI with data protection and high levels of data reduction; Nutanix, with its AHV hypervisor; VMware's vSAN; and Pivot3, with a top-to-bottom NVMe data path that enhances storage performance.

 

There's a lot of pressure on an organization's IT team to improve performance, efficiency, and flexibility – many executives end up unhappy with their IT department's ability to quickly introduce new technologies. 



Hyper-convergence is at the core of enterprise and hybrid cloud management, delivering on-premises IT services with the operational efficiency and speed of public cloud services like Microsoft Azure and Amazon Web Services (AWS) – bridging the gap between conventional infrastructure and the public cloud.

 

Here at ADS Consulting, we use an HCI infrastructure to power the compute, storage, backup, and recovery requirements of our own environment. We see it as a paradigm shift in how companies build their IT infrastructure and manage their data centers, with many benefits and very few disadvantages. HCI systems are the foundation for Public, Hybrid, or Private cloud infrastructure. Below are the top three benefits of HCI that are attracting more and more enterprises to the HCI realm.

 

HCI Equals Simplicity and Productivity

 

One of the most-cited benefits of HCI is its simplicity of management and overall productivity. Traditionally, an IT infrastructure had numerous components from different vendors, each managed by a separate team that required extensive training. That diverse hardware, with its distinct interfaces and functions, has historically been complex, expensive, and time-consuming to manage.

 

Hyper-converged infrastructure solutions and their virtualized workloads are managed with a single toolset, consolidating all of those components into one platform and integrating them with existing work environments. HCI also assists in troubleshooting and monitoring, as most HCI solutions expose these tools on a single dashboard.

 

Robust speed – HCI enables accessing, sharing, and sending data across multiple appliances faster.

Better analysis – all the enormous amounts of data generated by businesses have brought complexities in accessing, collecting, and swiftly analyzing the data. HCI streamlines access and data-driven decisions for organizations.

Remote management – HCI enables real-time remote management of your workload with its software layer.

Data protection – HCI provides fast snapshots, data deduplication, and data security, and makes disaster recovery more straightforward. It offers high resiliency with a distributed model – data is spread across multiple nodes throughout the data center, or between data centers in different geographic locations.

Modernization – replaces old hardware at far less cost without negatively affecting your business operations.

 

 

Flexibility and Lower Costs of Hyperconvergence

 

Regardless of the platform or technology, cost is often the deciding factor in its selection, and for hyper-converged infrastructure, cost savings are in fact the driving force behind its rapid growth. In traditional environments, management complexity adds cost in the form of hours spent maintaining hardware, administering workflows, and working with multiple vendors for support and issue resolution.

 

HCI provides a high degree of flexibility and scalability, as every HCI node is an independent unit containing all of the hardware a data center needs for powerhouse performance. With a single-vendor HCI solution, implementation, customization, and scaling all translate into saved time and cost. All in all, HCI requires less equipment to buy, whether you are building new or upgrading, which means saving money, time, space, and headaches.  

 

 

HCI Improves Agility

 

A hyper-converged infrastructure enables significant infrastructure and management savings over the traditional 3-tier model of centralized storage, storage network, and compute. But the most significant opportunity for enterprises is the improved business agility. An independent study funded by Nutanix – one of the top HCI vendors today – found that HCI solutions deliver notable infrastructure savings and increased productivity, but the single largest benefit was business agility: productivity and responsiveness to changing business needs accounted for 43 percent of the overall benefits.

 

The report further asserts that the increased agility results from three areas: less downtime, enhanced performance, and higher revenues.



  • Less downtime: IDC reported that organizations averaged a 99.7 percent reduction in unplanned downtime and a 100 percent reduction in planned downtime, which increases average user productivity.
  • Increased performance: The IDC report puts the average enhanced application performance on Nutanix at 50 percent, which is a significant leap.
  • Higher revenue: Improved performance and application scaling boosts employee productivity and generates higher revenue.

 

ADS Consulting Group brings decades of experience to the table in IT consulting, and we're here to help you. Regardless of what you may already understand about HCI and the different solutions available, it's at the very least worth a no-obligation consultation with someone on our leadership team who will be able to really understand what you need, then make the right recommendation based on performance requirements, storage requirements, budget and other factors. 

Call (310) 541-8584 today🔥



Nov 13
Top 3 Backup and Disaster Recovery Strategies

backup-disaster-recovery-team.jpg

With our lives revolving around technology and the internet, the era of printed copies is long gone, replaced by data and IT infrastructure. In this environment, losing access to primary data – at the individual level or the enterprise level – is a nightmare. It can halt our lives and jeopardize the existence of businesses. And this is exactly why data security has become a hot topic and a necessity rather than a choice.

Imagine that you run a company and, because of an unforeseen disaster – anything from a minor power outage that takes a while to recover from, to a cyber attack, to a non-technological calamity like a flood, cyclone, fire, or theft at your office premises – you lose access to your IT data. What do you do now? 

Downtime can mean a significant loss of business and revenue. It can lead to derailed customer interactions, a decrease in employee productivity, and a standstill in business processes. This is why having backups and a disaster recovery strategy is essential. You might not be able to prevent disasters, but you can ensure that if a disaster strikes, you can get back on your feet with all of your data and applications as soon as possible. 

Before discussing backup and data recovery strategies any further, let's first understand the difference between backup and disaster recovery.

The Difference between Backup and Recovery

What is Data Backup?

Backup means creating an extra copy or multiple copies of the data to restore it in case of accidental deletion, problems with software upgrades or data corruption.  Always follow the 3-2-1 Backup rule. 3 Copies of your data, on 2 different media, and 1 copy offline and offsite.

What is data recovery?

Data recovery refers to the plans and procedures of quickly re-establishing access to IT resources, data and applications after a disaster strikes. By establishing a solid protocol, a company can ensure data protection and rapid recovery.

Why is backup and data recovery important?

Suppose you lose data because of an unforeseen calamity and it takes you hours to deal with the data loss. Those are hours of idleness in which your employees and partners lose productivity, because the loss prevents them from carrying out critical tasks that depend on the data. In a large organization, those hours matter and can translate into a huge financial loss. And if recovery takes days, it can lead to a permanent loss of customers – nobody wants that! Hence, data backup and recovery are of the utmost importance to just about any organization.

Best data backup and recovery strategies

Before getting into specific disaster recovery solutions, it is important to understand the significance of prioritizing workloads. Many larger organizations define RTOs (recovery time objectives) and RPOs (recovery point objectives) that reflect the importance of each workload to the company. They classify workloads as Tier 1, Tier 2, Tier 3 and so on based on their importance. In a disaster or emergency, the company first brings up the Tier 1 workloads, which are the most critical to the business; the less important tiers can be recovered later. Defining workloads this way provides a framework for disaster recovery.
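
To make the tiering concrete, here is a minimal sketch of a recovery priority list in Python.  The workload names, tiers, RTOs, and RPOs are hypothetical examples, not recommendations for your environment.

    # Hypothetical tiered recovery plan: restore Tier 1 first, then Tier 2, etc.
    workloads = [
        {"name": "ERP database",     "tier": 1, "rto_hours": 2,  "rpo_hours": 1},
        {"name": "Email",            "tier": 1, "rto_hours": 4,  "rpo_hours": 1},
        {"name": "File server",      "tier": 2, "rto_hours": 12, "rpo_hours": 4},
        {"name": "Dev/test servers", "tier": 3, "rto_hours": 72, "rpo_hours": 24},
    ]

    for wl in sorted(workloads, key=lambda w: w["tier"]):
        print(f'Tier {wl["tier"]}: {wl["name"]} '
              f'(RTO {wl["rto_hours"]}h, RPO {wl["rpo_hours"]}h)')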

Getting Your Disaster Recovery Planning In Place

Cloud-based backup and recovery processes

Cloud backups are becoming increasingly popular in both small and large organizations because of the multiple benefits they provide. Many cloud solutions offer the infrastructure for data storage, and some also provide the tools for managing backup and disaster recovery processes. By opting for cloud backup you can avoid the large investments otherwise required for additional infrastructure and for managing that environment. Cloud technology also gives you the geographic distance you need to keep your data safe if a natural calamity hits your region, along with rapid scalability.

With cloud storage, one can also opt for both on-premises and cloud-based backup solutions. This hybrid cloud approach gives you the benefits of duplication and geographic distance while keeping your production environment operating smoothly.

ADS Consulting Group's cloud backup solution not only stays on the bleeding edge of cyber-security, but also provides the physical security of armed guards 24/7 in a tightly controlled, climate-managed facility that surpasses current data center infrastructure management (DCIM) offerings.  If a disaster strikes, we can restore the servers stored in our cloud backup solution to our ADS Cloud infrastructure to minimize your downtime.

On-premises

If you want to host your backups only on-premises, there are some great solutions available as well.  Local backups cover a multitude of recovery scenarios, help you comply with strict data privacy policies, and support business continuity.

However, this strategy has its flaws: if a natural calamity hits your region, both your primary and secondary data systems might be affected. Hence, it makes sense to store your data at two different geographic locations. The secondary data center can be located across town or across the country, depending on how you balance factors like performance, data recovery time, and physical accessibility to the secondary data center.

Once you have decided on the geographical location for data storage then you can choose the type of technology or process you want to use for disaster recovery.

The most effective backup and recovery plans protect data in both onsite and cloud-based backups. And it's not as hard as you might think to safeguard your business continuity in this way.

Disaster recovery as a service

A disaster recovery service can be based in the cloud, onsite, or both. In any case, it's imperative that every organization makes this a priority, especially as its intellectual property grows virtually every day.

The leadership team at ADS Consulting Group brings decades of experience to the table and is here to help prevent your business from grinding to a halt in the case of a disaster. One easy call and help will be on the way! Get in touch and we'll give you a free strategy call which should be of high value whether we work together or not. "An ounce of prevention…" well, you know the rest. Let's chat!  Call us at (310)541-8584 x 100 or send us an email at info@adscon.com to start the conversation.

 



Oct 15
SimpliVity versus Nutanix – Which one should you use?

 

 

simplivity or nutanix compared

 

What is Hyperconverged Infrastructure (HCI)? 

Understanding the Differences - Simplivity vs. Nutanix - Broken Down

SimpliVity and Nutanix are vendors that offer Hyperconverged (HCI) solutions.  HCI consolidates the traditional memory, compute, networking, storage, backup, and disaster recovery into a single node with integrated management tools.  For fault tolerance, at least two copies of the hyperconverged storage reside on separate nodes.  Direct attached storage on each node uses data virtualization to create a "virtual" SAN.  Most HCI systems run on x86 systems.

 

We view hyperconvergence as a paradigm shift in the way companies build out their IT infrastructure and data centers.  Like other paradigm shifts, it brings many benefits and very few disadvantages.   Listed below are some of the major hyperconverged vendors, including SimpliVity, Nutanix, and several others: 

 

  1. HPE SimpliVity.  Once-and-forever global data deduplication.  Backup and restore any VM, regardless of size, in seconds.  Runs on HPE Hardware.

    simplivity hyperconverged hardware







  2. Nutanix.  Runs the Acropolis Hypervisor (AHV) or VMWare's vSphere ESXi Hypervisor.

    nutanix cluster
     







  3. Scale Computing.  Uses the HC3 Hypervisor as an alternative to VMware's vSphere ESXi Hypervisor.  Great integration of the Hypervisor and HCI environments.


    scale computing hc







  4. Pivot3.  Automated policy-based management that automates mixed application workloads.

    pivot3 hyperconverged






  5. VMware vSAN and EVO:Rail.  A data virtualization platform that supports different hardware vendors.  Direct attached storage is used to create a virtual SAN.  The software-defined vSAN lets you customize your vSAN hardware configuration, while EVO:Rail comes with pre-configured hardware.  Although VMware vSAN requires a specific hardware configuration, it is the closest thing to a purely software-defined HCI storage SAN.

More about Simplivity Hyperconverged 


 

 

HCI is an enterprise data and infrastructure solution.  We see HCI solutions replacing most traditional SAN installations during an IT refresh cycle.  We use an HCI infrastructure to power our compute, backup, and cloud storage requirements at ADS Cloud.  HCI systems are the foundation for a Public, Private or Hybrid Cloud infrastructure.  Here are some benefits of HCI:

 

Cost Savings 

 

The biggest advantage of hyperconvergence is cost savings.  Hyperconvergence eliminates the need for a dedicated Storage Area Network (SAN) and storage administrator, and reduces management costs across the entire HCI stack.  HCI is not inexpensive, but it eliminates a significant amount of hardware.  Expect to save 20% to 30% on the initial cost of an HCI versus a non-HCI solution.  Your total cost of ownership will go down over the HCI lifetime because storage management and the other HCI management functions are accessed through a single management interface.  Your systems administrator costs should also go down, because HCI is easier to manage than a non-HCI environment.

 

Scalability

 

It's easy to scale compute, memory, and storage by adding additional nodes.  It's important to get the node configuration correct for your company, so you scale up your capacity in increments that are appropriate for your data center.

 


 

Simplified Management

 

Simple and easy to use interface to run workloads at the application level.   Because of the HCI single management interface, you'll save time managing your IT infrastructure. 

 

Predictability

 

Efficiently manage workflows and make it easy to allocate the appropriate amount of resources for a given workload.  Identify potential resource issues proactively and address them before they become constraints.

 

Consolidation

 

Because of HCI, you're running fewer pieces of hardware.  You save on hardware, rack space, cooling costs, and power requirements.  Some organizations were able to consolidate from ten racks to three racks using an HCI solution.

 

Storage and protection 

 

Many HCI solutions offer data replication and simplify disaster recovery and failover.  At least two copies of your data are stored on different nodes to protect your data and provide fault tolerance.  If one node goes down, your servers stay up and running.

 

Data Locality

 

Most HCI systems optimize for data locality.  VMs that run on a given host have a local copy of their data on the same host.  This reduces the amount of bandwidth consumed on the network.  Having a VM's files on the same host where the VM is running allows the VM to access storage at local bus speeds, not network speeds.  This improves overall performance and reduces network congestion.

 

The Right Question to Ask

 

As you narrow down the hyper-converged infrastructure for your IT strategy, ask which features are most important to your company.  The question isn't "SimpliVity vs. Nutanix vs. Scale vs. Pivot3," as all of them provide powerhouse solutions and are potent contenders.   The right question to ask is:  "Which hyperconverged infrastructure solution is the best fit for my company, based on my IT infrastructure requirements?"  

 

Let's discuss the differences and similarities between the two most popular hyperconverged infrastructures out there: SimpliVity and Nutanix. If you are considering a move to HCI, which one should you go for? Let's dive in. 

 

Nutanix 

 

Nutanix is credited with coining the term "hyper-converged infrastructure," and today it is the largest HCI vendor. Founded in 2009, Nutanix refers to itself as a cloud computing company, providing software-based, highly scalable, cloud-enabled, federated hyperconverged infrastructure. Nutanix delivers both software and hardware, including HCI appliances sold through original equipment manufacturer (OEM) partners. 

 

Primary offerings from Nutanix include a hypervisor, security, software-defined networking, and an HCI management tool. Nutanix Acropolis is a turnkey HCI system with Nutanix software, a hypervisor, server virtualization, storage management, software-based storage, virtual networking, and cross-hypervisor application mobility.  You can run the Nutanix AHV hypervisor on Nutanix, Lenovo, Dell, Cisco, HPE, or IBM hardware.  Nutanix also supports hypervisors from VMware (ESXi), Citrix (XenServer/Citrix Hypervisor), and Microsoft (Hyper-V).  Nutanix Acropolis is made up of three core components:

  1. Prism.  The Management Plane that provides unified management across the virtualization environment.  It optimizes the virtualization environment, provides infrastructure management and handles daily operations.
  2. Acropolis.  The foundation of the HCI platform.  Acropolis includes virtualization, storage, network virtualization, and cross-hypervisor application mobility.
  3. Calm.  Virtualizes application management and decouples it from the underlying infrastructure.  Easily deploy application workloads in private or public clouds.

 

Important Features: 

 

  • Nutanix supports several hypervisors, not just VMware, and also offers its own AHV hypervisor. 
  • Can perform selective deduplication 
  • Enterprise-grade backups 
  • Flash storage and flexible block size 
  • In-built storage software providing user-friendly file recovery options 
  • Powerful compression giving up to a 4x increase in effective storage capacity 
  • When working with clones, snapshots, or failed drive rebuilds, the performance impact is almost zero.
  • Built-in cloud proficiency 
  • Non-disruptive, automatic upgrade of Nutanix OS, hypervisor and firmware 

 

Pros 

 

High availability 

Replicates on remote and on-site physical nodes and supports a protection domain.  Virtual Machines (VMs) in a protection domain are backed up locally and replicated to one or more remote sites.

 

Simplified management 

Straightforward deployment, configuration, and training. Prism offers an intuitive interface to centrally manage several clusters from a single web portal. 

 

Performance 

Nutanix doesn't use traditional RAID; it uses a "Replication Factor" to store data blocks across multiple servers, which improves I/O performance.  Deduplication and native optimization improve data storage, and performance increases linearly as the cluster expands. 

 

Support 

Nutanix has maintained an excellent support reputation. 

 

Cons 

 

Costly 

Though Nutanix software licenses are cost-effective, Nutanix doesn't compete on low price. Its AHV hypervisor is free to use on its HCI systems, but Nutanix is typically priced higher than similarly configured competitors. 

 

Nutanix Storage

 

For a non-Nutanix ESXi node to access Nutanix storage, it must be a member of the same Nutanix cluster. 

 

Hewlett Packard Enterprise SimpliVity 


Hewlett Packard Enterprise (HPE) acquired SimpliVity Hyperconverged Systems in 2017 to strengthen its HCI portfolio. HPE SimpliVity focuses on hyper-convergence, availability, and scalability with competitive pricing.  SimpliVity supports the VMware (ESXi) and Microsoft (Hyper-V) hypervisors and is compatible with the Citrix Ready HCI Workspace Appliance Program.  It does not have a native hypervisor.  SimpliVity offers global disk deduplication, high availability, native backup, storage management, and replication.

 

The OmniStack data management software from Hewlett Packard Enterprise provides an outstanding, highly efficient data protection system. The OmniCube hyper-converged infrastructure combines server compute, storage, network, and virtualization in a highly scalable building block with integrated global management via a vCenter plugin.  The OmniCube's Data Virtualization Platform offers exceptional data efficiency.  A hardware accelerator card installed in each node deduplicates, optimizes, and compresses data.  Before any data are written to SimpliVity storage, the hardware card checks whether that pattern already exists anywhere in the SimpliVity Storage Federation.  Usually the pattern already exists, so SimpliVity creates a pointer to the existing block rather than writing the same data pattern in a different location.  SimpliVity guarantees 10:1 storage savings, but be aware that backups stored on SimpliVity storage are included in this calculation.  Expect approximately 1.5:1 compression when you migrate your VMs from traditional SAN storage.  When backups stored on SimpliVity are taken into account, the deduplication ratio can range from 20:1 to 80:1.  The global deduplication significantly enhances the data reduction capabilities of SimpliVity.
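
To see how backups inflate the deduplication ratio, here is some rough arithmetic in Python.  It simply applies the example figures quoted above (1.5:1 primary compression, mostly-duplicate backups) to a hypothetical 10TB dataset with 30 retained backups; it is an illustration, not a sizing estimate.

    # Rough illustration of how retained backups inflate a deduplication ratio.
    primary_data_tb = 10.0        # logical size of the VMs on SimpliVity storage
    compression_ratio = 1.5       # ~1.5:1 typically seen after migrating off a SAN
    physical_primary_tb = primary_data_tb / compression_ratio

    # 30 retained backups are logically another 300TB, but because most blocks
    # already exist in the Storage Federation, assume only 1% of each backup is
    # new, unique data that consumes physical space.
    backup_copies = 30
    unique_fraction = 0.01
    logical_backup_tb = primary_data_tb * backup_copies
    physical_backup_tb = logical_backup_tb * unique_fraction

    logical_total = primary_data_tb + logical_backup_tb
    physical_total = physical_primary_tb + physical_backup_tb
    print(f"Effective efficiency: {logical_total / physical_total:.0f}:1")
    # With these assumptions the ratio works out to roughly 32:1 -- well inside
    # the 20:1 to 80:1 range, even though the primary data alone was only 1.5:1.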

 

HPE SimpliVity has integrated protection – backup policies at the VM level determine the virtual machine copies, retention time, and storage destination. You create a backup policy and assign it to a datastore, and any VM that resides on the datastore inherits that backup policy. SimpliVity claims you don't need any additional backups.  However, we feel that this is wrong.  In our opinion it's extremely important to keep backups on secondary storage and, ultimately, offline and off-site.  We use a 3rd party backup product (Veeam) for all of our SimpliVity installations.   We recommend that you follow the 3-2-1 backup rule:  3 copies of your data, on 2 different media, with 1 copy offline and off-site.  Regardless of the HCI solution, following the 3-2-1 backup rule should always be a part of any company's backup and recovery strategy. 

 

HPE SimpliVity hyper-converged systems come in the 2600 and 380 models, which are available in different configurations.  SimpliVity RapidDR accelerates and automates off-site disaster recovery and integrates with Hewlett Packard Enterprise tools like Composable Fabric. 

 

Important features: 

 

  • Global once and forever disk deduplication.
  • Works with an HPE OmniStack PCIe accelerator card 
  • Supports a two-node cluster configuration.  The minimum number of Nutanix nodes is three. 
  • Guaranteed 10:1 storage gains.
  • Clone a VM in 15 seconds. 
  • Backup and restore a VM in 15 seconds regardless of its size.
  • As little as 15-minute replication intervals to your DR site.  Depending on the speed of your replication line, replication can complete in under six minutes.
  • Innate, end-to-end data protection that cuts bandwidth and adds virtual machine recovery points 
  • Integrated backup solution 
  • Inexpensive all-flash storage solution 
  • Access more virtual machines with less hardware 
  • Single point of support contact 

 

Pros 

 

Data protection 

Backup and restore any VM regardless of size in under 15 seconds.  Replicate to a DR Site in under six minutes.

 

Management Ease

HCI management tasks are streamlined using centralized management. 

 

Scalability 

Linear capacity and performance enhancements with on-site and remote addition of nodes. 

 

Global Once and Forever Disk Deduplication

The OmniStack Accelerator significantly reduces Input/Output Operations per Second (IOPS) by eliminating duplicate writes of identical disk patterns across the entire SimpliVity Storage Federation. 

 

HP tools integrated 

HPE tools like OneSphere and OneView integrate with SimpliVity. 

 

SimpliVity vs. Nutanix - Nodes: Storage is available to Non-SimpliVity Nodes

 

Non-SimpliVity Servers can access SimpliVity storage and take advantage of backup, deduplication, and replication.  Non-SimpliVity servers "see" the SimpliVity storage as a virtual Network File System (NFS) volume.  Non-SimpliVity Nodes do NOT have to be a member of the SimpliVity cluster to access SimpliVity storage.

 

Cons 

 

HPE Hardware Only

You must use HPE hardware.  You must purchase a SimpliVity node in a specific configuration from HPE.  You cannot order a generic HPE DL380 server and turn it into a SimpliVity Node.

 

No native HCI hypervisor 

There is no native HCI hypervisor like the one Nutanix offers. Hewlett Packard Enterprise partners with Microsoft, VMware, and Citrix for hypervisors. 

 

Which solution is better for you?  It depends on the computing requirements of your company.  Ask: "What are you trying to do with your HCI infrastructure?" and the correct solution for you will be obvious. 

 

Get in touch with us at ADS Consulting Group, and we'll help narrow your decision through a specific checklist in a quick interview. Call us at (310) 541-8584 for a quick, no-strings chat!

 

 

Oct 15
What is HPE SimpliVity? 3 Reasons You Should Check it Out

speed-protection-efficiency-simplivity-hpe.jpg 

 

HPE SimpliVity

Hewlett Packard Enterprise (HPE) SimpliVity is an integrated, all-embracing system that combines compute, storage, networking, and other data center services in a hyperconverged stack. It is an ideal solution for organizations refreshing the key components of their on-premises, public cloud, and hybrid cloud infrastructure, as it provides smooth VM management, automation, application performance, and much more.

 

Reasons you should check out HPE SimpliVity

 

HPE SimpliVity hyperconverged infrastructure (HCI) delivers the enterprise-level performance, protection, and resiliency that businesses require, with the cloud-like experience many have come to expect. To make the decision as easy as possible, on top of the robust capabilities HPE offers one of the most thorough guarantees in the industry, covering data efficiency, protection, simplicity, management, and availability.

1.    Hyper Efficiency and Storage Management

SimpliVity comes with VM-centric backup capabilities, allowing users to achieve 90% storage capacity savings across storage and backup together, relative to other traditional solutions (think multiple hard drives, storage servers etc…).

SimpliVity uses a purpose-built OmniStack Accelerator card to deduplicate, compress, and optimize all data globally at inception, eliminating unnecessary data processing and enhancing application performance.

SimpliVity helps you manage massive amounts of data, delivering as much as 40:1 storage efficiency.  Your results may be higher or lower depending on your data retention policy and on how well your data deduplicates and compresses.  Because of the global deduplication, SimpliVity significantly reduces the number of IOPS (input/output operations per second) that are needed. One user said, "a 10-terabyte server could easily push 1,000 IOPS by itself".

2.    Data Protection and Quick Backup

HPE SimpliVity HCI and its built-in backup capability take under a minute to run a local backup or restore of a 1 TB VM.  A restore from native SimpliVity backup storage is usually your best option for a fast, granular restore of a VM.

In addition to the SimpliVity backup, we still recommend purchasing 3rd party backup software like Veeam so you can follow the 3-2-1 Backup rule:  3 copies of your data, on 2 different storage media, with 1 copy offline and offsite.  If you get hit with a massive crypto infection, there is a chance that all of your online backups get encrypted as well.  If all of your backups are stored online, you may be unable to restore.  Another reason to keep your backups offline is hackers.  If hackers gain access to your system, they can:

  1. Steal your data.
  2. Delete all of your online backups.
  3. Delete all of your production servers.
  4. Put you out of business.

This already happened to a company called VFEMail.  Regardless of your HCI infrastructure, always follow the 3-2-1 backup rule.

3. Enhanced Data Mobility and Speed

When you use HPE SimpliVity, the user interface (UI) integrates with VMware vCenter – full integration with the hypervisor – and in just three clicks you can restore, back up, clone, or move a VM, all from one console.

Using the SimpliVity UI from a central location, it takes an average of less than a minute to create or update backup policies for thousands of VMs across dozens of sites (you read that right).

You can add or replace SimpliVity HCI systems without any downtime for local or remote sites and without disrupting existing SimpliVity backups. You'll never need to reconfigure SimpliVity backup policies for existing sites, remote or local, nor will you need to re-enter any IP addresses for a remote site.

Data flowing between multiple locations is complex and hard to manage. With SimpliVity, the connection between offsite locations and branches becomes a breeze. SimpliVity makes use of the wide area network (WAN) far more efficient by replicating only unique blocks, eliminating the need for separate WAN optimization technology.

 

Conclusion

The HPE SimpliVity 380 uses an abstraction layer on top of the underlying hardware to write deduplicated and compressed data to disk and to create time-based copies of that data. It also helps move data and VMs from one system to another, either locally or in the cloud. HPE SimpliVity 380 software integrates with VMware vCenter, so the whole data center infrastructure can be managed from a single console.

SimpliVity, in addition to the features above, also provides centralized management that makes it easy to scale storage across several locations. Without the usual hassle of the configuration of IP addresses and controllers, you can move data from on-premises to the public cloud or hybrid cloud infrastructures.

Simply put, it's the "future of storage." In a world where digital asset catalogs keep getting larger and access to that data becomes more and more complicated, a hyper-converged infrastructure is exactly what a company with a growing index of digital assets needs to maximize storage, efficiency, and accessibility.

Get in touch with us at ADS Consulting Group, and we'll help narrow your decision (there are a few HCIs on the market) through a specific checklist in a quick interview. Call us at (310) 541-8584 x 100 for a quick, no-strings chat, or send us an email at info@adscon.com. 

 

Sep 23
Your LTO 8 Tapes are finally available and make it easier to follow Backup Best Practices

LTO8 Tape.jpg

As you may know, Sony and Fujifilm (the last two LTO 8 tape manufacturers) were involved in a lawsuit that prevented the sale of LTO 8 tapes.  Ironically, you could purchase an LTO 8 tape drive, but not the LTO 8 tapes.  LTO 8 tapes (12TB native) double the capacity of LTO 7 tapes (6TB native).  A quick check of some online vendors now shows the tapes in stock.  Here's a link https://www.backupworks.com/HPE-LTO-8-Tape-Media-Q2078A.aspx.  Tape is still one of the best ways to store your backups offline and offsite.  Make sure to follow the 3-2-1 backup rule:  three copies of your data, on two different media, and one copy offline AND offsite.  Data replicated to a Disaster Recovery site usually doesn't count because your data are still online.  If you purchased an LTO 8 drive, you can now finally utilize the full capacity of the tape.
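
If you're sizing an offsite tape rotation, the arithmetic is simple.  Here's a quick Python sketch using the 12TB native capacity quoted above; the backup set size and the number of weekly rotations are hypothetical examples.

    import math

    lto8_native_tb = 12           # LTO 8 native capacity (no compression assumed)
    weekly_full_backup_tb = 30    # size of one weekly full backup set (example)
    offsite_weeks_retained = 4    # weekly sets kept offsite at any time (example)

    tapes_per_full = math.ceil(weekly_full_backup_tb / lto8_native_tb)
    total_tapes = tapes_per_full * offsite_weeks_retained
    print(f"{tapes_per_full} tapes per weekly full, "
          f"{total_tapes} tapes in the offsite rotation")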

Sep 13
Protect your SQL Server Data in a Virtualized Environment

SQL Server.jpg

VeeamBackupandReplication.jpg

SQL Server has specific requirements to ensure you get a good backup and, more importantly, are able to restore from that backup.  Although this Blog post is targeted at backing up SQL Server in a virtualized environment, most of the concepts apply to all SQL Servers – physical or virtual.  Many SQL Server backup issues center around the Authentication Mode (Windows or Mixed) and the Recovery Model (Simple, Bulk-logged or Full).  These terms are explained below:

  1. Authentication Mode.  This is selected during the installation of SQL Server.  There are two modes:
    1. Windows Authentication (Recommended).  This mode only allows authentication from Windows Active Directory.  Because you can place much tighter security controls on Active Directory Users, this is the preferred configuration.  Of course, you'll need to have Active Directory implemented in your environment with at least two Domain Controllers (DCs) on your network for fault tolerance.
    2. Mixed Mode.  This allows both Windows Active Directory Users and SQL Server Users.  In the SQL Server Management Studio, you can create specific SQL Server Users that will be allowed to authenticate to the SQL Server.  Some applications require Mixed Mode authentication, but use Windows Authentication whenever possible.
  2. Recovery Model.  There are three recovery models in SQL Server. 
    1. Full (Recommended).  All changes to the SQL Server are recorded in the Transaction Log File.  This file is truncated after a successful transaction log backup (or by a backup tool such as Veeam with log truncation enabled) to keep its size manageable.  Using transaction log backups, you can restore a SQL Server backup from the previous day and then use the transaction logs to roll the SQL Server forward to a desired point in time.
    2. Bulk Logged.  Performs minimal logging and does not support point-in-time restore.  However, you can recover to the end of any backup.
    3. Simple.  The transaction log is automatically truncated, so changes since the last backup are unprotected.  There is no point-in-time restore.

For the purposes of this Blog Post we will focus on SQL Server in Windows Authentication Mode and SQL Server databases with the Full recovery model.  When backing up any SQL Server, make sure to properly quiesce the SQL Server during the backup.  Quiescing (called Application-aware processing in Veeam) temporarily "stuns" the SQL Server and stops the transaction flow.  This ensures that either ALL or NONE of a transaction is included in the backup.  If you do not quiesce the SQL Server, you risk getting a SQL Server backup with partial transactions.  This is referred to as a "dirty" SQL Server backup.  If you have to restore from a "dirty" backup, you'll have to remove any partial transactions before it can be used.

For any server backup, make sure to follow the 3-2-1 rule:  3 copies of your backup, on 2 separate media (typically disk and tape), with at least 1 copy off site and off line.  Offline backups are vital in today's environment because they give you Ransomware protection.  If all of your backups are stored online and you get hit with ransomware that encrypts your backup files, you will not have any way to recover.  

Consider performing hourly log backups for critical SQL Server databases.  These log backup files should be stored on a separate server in case the SQL Server crashes in the middle of the day.  Suppose you back up your SQL Server nightly and the next day the SQL Server crashes.  You can restore from the night before and use the log backups to recover the SQL Server to within an hour of when it crashed. 
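
Here is a minimal sketch of what that point-in-time restore looks like using native T-SQL, driven from Python with pyodbc.  The server name, database name, UNC paths, and timestamp are hypothetical examples; in practice you may do this through Veeam or SQL Server Management Studio instead.

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=SQL01;DATABASE=master;"
        "Trusted_Connection=yes;",
        autocommit=True,   # BACKUP/RESTORE cannot run inside a transaction
    )
    cur = conn.cursor()

    def run(sql):
        cur.execute(sql)
        while cur.nextset():   # drain the informational messages RESTORE emits
            pass

    # 1. Restore last night's full backup and leave the database restoring.
    run(r"RESTORE DATABASE Sales FROM DISK = N'\\BACKUP01\sql\Sales_full.bak' "
        r"WITH NORECOVERY, REPLACE;")

    # 2. Roll forward the hourly log backups, stopping just before the crash.
    for log in (r"\\BACKUP01\sql\Sales_log_0800.trn",
                r"\\BACKUP01\sql\Sales_log_0900.trn"):
        run(rf"RESTORE LOG Sales FROM DISK = N'{log}' "
            rf"WITH STOPAT = '2019-09-13 09:45:00', NORECOVERY;")

    # 3. Bring the database online at the chosen point in time.
    run("RESTORE DATABASE Sales WITH RECOVERY;")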

We highly recommend using Veeam Backup & Replication (Enterprise edition) to back up all virtual servers.  Listed below are the typical steps we use to back up a SQL Server in a virtualized environment.

1.       Create a dedicated backup account in Active Directory that will be used to back up VMs.  DO NOT use the Administrator account!  For more information refer to https://helpcenter.veeam.com/docs/backup/vsphere/required_permissions.html?ver=95u4.

2.       For SQL Server, this account should have the following permissions (a scripted example follows this list):

    1. SQL Server instance-level roles: dbcreator and public
    2. Database-level roles: db_backupoperator, db_denydatareader, public
    3. For system databases:
      1. master - db_backupoperator, db_datareader, public
      2. msdb - db_backupoperator, db_datawriter, db_datareader, public
    4. Securables: view any definition, view server state
    5. For truncation of SQL Server 2012 or SQL Server 2014 database transaction logs, this account should have the db_backupoperator database role (minimum required) or the sysadmin server role.
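
Here is a sketch of granting those roles with T-SQL from Python/pyodbc, run as a sysadmin.  The domain account CONTOSO\svc-veeam and the Sales database are hypothetical examples; adjust the names for your environment.

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=SQL01;DATABASE=master;"
        "Trusted_Connection=yes;", autocommit=True)
    cur = conn.cursor()
    login = "CONTOSO\\svc-veeam"   # hypothetical dedicated backup account

    # Instance-level: create the Windows login, add dbcreator, grant securables.
    cur.execute(f"IF SUSER_ID(N'{login}') IS NULL CREATE LOGIN [{login}] FROM WINDOWS;")
    cur.execute(f"ALTER SERVER ROLE [dbcreator] ADD MEMBER [{login}];")
    cur.execute(f"GRANT VIEW ANY DEFINITION TO [{login}];")
    cur.execute(f"GRANT VIEW SERVER STATE TO [{login}];")

    # Database-level roles for a user database and the system databases.
    role_map = {
        "Sales":  ["db_backupoperator", "db_denydatareader"],
        "master": ["db_backupoperator", "db_datareader"],
        "msdb":   ["db_backupoperator", "db_datawriter", "db_datareader"],
    }
    for db, roles in role_map.items():
        cur.execute(f"USE [{db}]; IF DATABASE_PRINCIPAL_ID(N'{login}') IS NULL "
                    f"CREATE USER [{login}] FOR LOGIN [{login}];")
        for role in roles:
            cur.execute(f"USE [{db}]; ALTER ROLE [{role}] ADD MEMBER [{login}];")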

3.       Adequate backup storage.  We suggest having at least three times your production disk space available for backups so you can retain a reasonable amount of backup history on disk – approximately three weeks.

4.       Backup Job Selection.  For VMware, we suggest backing up at the vCenter (Recommended), Datacenter, or Cluster level.  That way, any new VMs that are added at a lower level are automatically included in the backup. 

5.       Veeam Daily SQL Server Backups. 

    1. We suggest performing a full backup of SQL Server on the weekend and incremental backups during the week with Veeam.  It used to be that incremental backups were something to avoid.  However, with Veeam, incremental backups are a good thing.  If you need to perform a restore, Veeam takes the last full backup along with any necessary incremental backups and presents the server to you as of the last incremental backup.  There's no need to worry about whether a file was included in an incremental backup.
    2. We suggest performing synthetic full backups on the weekend.  A synthetic full takes the last full backup along with the incremental backups and consolidates the files into a new synthetic full. 
    3. We suggest performing an active/real full backup at least once a month in case the synthetic full backups get corrupted.

6.       Application-Aware Processing.  This will ensure that your SQL Server is quiesced when a snapshot is taken for the Veeam backup.  For any SQL Server we suggest the following settings in Veeam:

    1. VSS:  Require success.
    2. Transaction Logs:  SQL:  Truncate Logs.
    3. Exclusions:  Disabled.
    4. Scripts: No.

7.       Store backups to disk.  These backups should be written to disk as the primary backup target.

8.       Secondary Target.  Create a Tape Job that will copy the backup from disk to tape after the disk backup is completed. 

9.       Secondary tape backup.  At least once a week, we suggest performing a secondary tape backup to a different Veeam Media Pool that is taken off site.

10.   Hourly SQL Server database log backups to a different server.  If the SQL Server is critical, we suggest creating a Maintenance Plan that backs up the transaction logs of critical databases to a different server during business hours.  For more information on log backups refer to https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/transaction-log-backups-sql-server?view=sql-server-2017.
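
As an alternative to a Maintenance Plan, the hourly log backup itself is a single T-SQL statement.  Here is a minimal Python/pyodbc sketch you could schedule with Task Scheduler or a SQL Agent job; the server, database, and UNC share names are hypothetical examples.

    import datetime
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=SQL01;DATABASE=master;"
        "Trusted_Connection=yes;", autocommit=True)
    cur = conn.cursor()

    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M")
    target = rf"\\BACKUP01\sqllogs\Sales_log_{stamp}.trn"   # a different server

    cur.execute(f"BACKUP LOG Sales TO DISK = N'{target}' WITH INIT, CHECKSUM;")
    while cur.nextset():   # drain the informational messages BACKUP emits
        pass
    print(f"Transaction log backed up to {target}")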

11.   Perform a test restore.  After you have your backup strategy in place, perform a test restore of the SQL Server.  With the Enterprise version of Veeam you can restore at the following levels:

    1. Entire Server.
    2. Disks on the Server.
    3. Files on the server.
    4. SQL Server Databases.

Do NOT skip this step!  You don't want to find out that you can't restore when you really need it.  Make sure you can perform a proper SQL Server restore without any issues before you have to perform a "real" SQL Server production database restore.

SQL Server backups can be intimidating to an IT professional.  Hopefully this Blog Post can help demystify SQL Server backups for your company.

Aug 07
How to Fix Office x64 File Extension Associations on your computer

MSOffice.jpg

When you install Office 2016 64-bit and later, file extensions and default applications may not be correctly associated with Office programs.  To fix this issue:

  1. Complete the Office installation.
  2. Close all Office Applications.
  3. Click on Start and type Control Panel.
  4. Make sure View by: is set to Small Icons.
  5. Click on Programs and Features.
  6. Locate your Office installation, right click it, and select Change.
  7. Select Quick Repair, then click Repair.
  8. Set Default Apps as necessary.
    1. For email.  Right click on Start, Settings, Default Apps. Click on Email and select Outlook.
  9. Verify file extensions are properly associated with Office Programs.
    1. Scroll down and select Choose default apps by file type.  Review the Office file extensions (doc, docx, xls, xlsx, etc.) and verify they are associated with the correct Office application.  If they are not, click on the application to the right of the file extension and change it to the desired Office application.  (A quick verification script follows this list.)
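
If you want to confirm the result, the sketch below (Python, using the standard winreg module) reads the system-wide ProgId registered for each Office extension.  The expected value shown in the comment is typical and varies by Office version; per-user overrides live under HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts.

    import winreg

    extensions = [".doc", ".docx", ".xls", ".xlsx", ".ppt", ".pptx"]

    for ext in extensions:
        try:
            # Default value of HKEY_CLASSES_ROOT\<ext> is the registered ProgId.
            prog_id = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, ext)
        except OSError:
            prog_id = "<not registered>"
        print(f"{ext:6} -> {prog_id or '<empty>'}")   # e.g. .docx -> Word.Document.12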

We ran into this issue when installing Office 2019 x64 on Windows 10.  It was a little irritating trying to figure out why we couldn't open files with Office 2019.  Hopefully this tip will save you some frustration. 

Jul 15
Are CPUs assigned to your SQL Server Going to Waste?

SQL Server and CPUs.jpg

Allocating CPUs to a Virtual Machine (VM) running SQL Server Standard on a VMware host should be pretty easy, right?  Yes and no.  There are a few constraints to observe when making CPU allocation changes to the VM; otherwise SQL Server may not be able to take advantage of the additional computing power.  SQL Server Enterprise is limited only by the Operating System maximum and is not subject to these constraints.  Refer to the table below for the compute maximums of SQL Server Standard:

SQL Server Standard Version    Maximum Compute Capacity
2014                           Limited to the lesser of 4 sockets and 16 cores
2016                           Limited to the lesser of 4 sockets and 24 cores
2017                           Limited to the lesser of 4 sockets and 24 cores

When you change the number of CPUs allocated to a VM on VMware, make sure not to exceed these maximums.  When you edit the properties of the VM, first determine the total number of CPUs you want to allocate to the VM.  After you allocate the number of CPUs, enter a value for the cores per socket – this determines the total number of CPU sockets that the VM will see.  By default, VMware allocates one core per socket, but you can change this value.  Consider the example below:

SQL Server 4 Sockets with 4 Cores.jpg

In this example, a total of 16 CPUs was allocated to the VM with 4 cores per socket.  The number of sockets that the VM will see is 4 (16 CPUs / 4 cores per socket).  Here's another example:

SQL Server 1 Socket with 16 Cores.jpg

In this example, a total of 16 CPUs was allocated to the VM with 16 cores per socket.  The number of sockets that the VM will see is 1 (16 CPUs / 16 cores per socket).  Here are a few general rules when allocating CPUs to a VM running SQL Server Standard (see the sketch after this list):

  1. Do not exceed 4 sockets.
  2. Do not exceed 24 cores for SQL Server 2016/2017 or 16 cores for SQL Server 2014.
  3. Having fewer sockets with more cores tends to work better with SQL Server.
  4. Minimize the number of different VM CPU configurations on a VMware host.  Having a single host with many different VM CPU configurations makes the CPU scheduler work much harder and will slow down performance of all VMs running on the host.
  5. Never exceed the number of physical cores on the SQL Server Host.
  6. Never exceed the amount of physical RAM on the host.
  7. Non-Uniform Memory Access (NUMA).  Starting with vSphere 6.5, vNUMA is automatically optimized.  However, you must still consider the total number of cores on each socket of the host and the amount of memory installed on the host.  If you have a two-socket host with 20 cores per socket (40 cores total) and 1TB of memory (512GB of RAM per CPU), verify that your VM is sized for that topology.  In this example you can configure a VM with up to 20 cores and 512GB of RAM on a single socket.  If the VM is configured with a single socket and you exceed 512GB of RAM, the vNUMA configuration will be non-optimal because the VM will have to access memory attached to the other CPU.   If you need to allocate more than 512GB of RAM to the VM, configure the VM with two sockets so that the load is spread over both CPUs.  Of course, in this example you're still "limited" to 1TB of RAM and 40 cores for the VM.
  8. If you're licensing SQL Server Standard by Core make sure you have enough licenses to cover the CPUs allocated to the VM.  Each SQL Server Core license entitles you to two cores.
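
Here is a quick Python sketch that sanity checks a proposed vCPU layout against these rules (as mentioned above).  The SQL Server Standard limits come from the table earlier in this post; the host topology and VM sizing in the example are hypothetical.

    SQL_STD_MAX_SOCKETS = 4
    SQL_STD_MAX_CORES = {"2014": 16, "2016": 24, "2017": 24}

    def check_vm(sql_version, vm_vcpus, cores_per_socket,
                 host_sockets, host_cores_per_socket, host_ram_gb, vm_ram_gb):
        problems = []
        if vm_vcpus % cores_per_socket:
            problems.append("vCPU count is not a multiple of cores per socket")
        sockets_seen = vm_vcpus // cores_per_socket   # what the guest OS will see
        if sockets_seen > SQL_STD_MAX_SOCKETS:
            problems.append(f"{sockets_seen} sockets exceeds the 4-socket limit")
        if vm_vcpus > SQL_STD_MAX_CORES[sql_version]:
            problems.append(f"{vm_vcpus} cores exceeds the "
                            f"{SQL_STD_MAX_CORES[sql_version]}-core limit")
        if vm_vcpus > host_sockets * host_cores_per_socket:
            problems.append("vCPUs exceed the host's physical cores")
        if vm_ram_gb > host_ram_gb:
            problems.append("VM RAM exceeds host RAM")
        # Keep a single-socket VM inside one physical NUMA node (cores and RAM).
        if sockets_seen == 1 and (vm_vcpus > host_cores_per_socket or
                                  vm_ram_gb > host_ram_gb / host_sockets):
            problems.append("single-socket VM spans more than one NUMA node")
        return problems or ["looks OK"]

    # Example: 16 vCPUs as 1 socket x 16 cores on a 2-socket, 20-core, 1TB host.
    print(check_vm("2017", vm_vcpus=16, cores_per_socket=16, host_sockets=2,
                   host_cores_per_socket=20, host_ram_gb=1024, vm_ram_gb=512))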

CPU allocation for SQL Server Standard can be tricky when you're attempting to scale a VM beyond 4 sockets.  For SQL Server Standard to take advantage of the compute power, make sure you don't exceed 4 sockets or the core limit for your version (16 cores for 2014, 24 cores for 2016/2017).

switchservers.jpg

Servers at Switch Las Vegas 

Home of ADS Cloud