SPLA (Service Provider Licensing Agreement) is a monthly licensing program. It is designed for organizations like Marco that host Microsoft software for their customers. SPLA is the only Microsoft Volume Licensing program that provides third-party license use rights, and it is designed specifically for service providers who want to offer one-stop, full-service solutions to their customers. Other Volume Licensing programs are for internal use only.
Advantages of the SPLA program:
1. No up-front costs.
2. Pay only for licenses you need each month.
3. No long-term commitments.
4. Gives Marco customers access to Microsoft products such as Office, Server, SQL, and Exchange.
5. Access the most current product versions.
6. No need to worry about staying compliant with Microsoft licensing.
7. Downgrade rights to prior versions of any Microsoft application.
8. Allows Marco to test and evaluate products internally before offering them to our customers.
9. Marco can install Microsoft products on devices we own that are located on our customers' premises.
Note: SPLA does NOT cover licensing for any Microsoft product installed on customer-owned equipment.
Do you feel that life is too short to:
1. Wait until the USB device is "safe to remove"?
2. Wait for Microsoft updates to be installed and your system rebooted?
3. Accept the anti-virus updates?
4. Wait for your anti-virus to scan your system?
I was talking to a customer a while back, and he proudly announced that he has never deleted an email. I thought to myself that this is not something to be proud of. It is like saying, "I have never changed the oil in my car." A car needs maintenance, and so does your Outlook email client. A few minutes a week can help keep your email client running smoothly.
It is fairly common to keep unneeded mail in both the Inbox and Sent Items folders. In case you didn't realize, a copy of every email you send is kept in the Sent Items folder. Often it is not necessary to keep years' worth of these. I recommend that you go to each folder, scroll to the bottom, and delete old email you no longer need. You can use Windows keyboard shortcuts to help with this: click the first email you want to delete, scroll down, hold the Shift key, and click the last email; this selects all items in that range. Then press the Delete key to delete the whole block of emails at once.
Also, don’t forget to look in personal folders you have created and delete emails no longer needed there.
When you delete email, it is moved into the Deleted Items folder, where it normally stays unless you empty it. To empty Deleted Items, go to that folder, select all the deleted items you wish to get rid of, and press the Delete key.
If you would like to see how much space you are using in Outlook, in most versions you can click File in the top ribbon. In the window displayed, next to the Mailbox Cleanup icon, you will see a graph showing how much storage you are using.
Some people recommend that you archive email to a PST file. That is something that is helpful for those who do not like to delete emails. But, I would be cautious. Here is an article that explains the downside to using PST files to archive email.
Another alternative is to create a folder in Windows and move the email you want to archive into it. Here is a good article that explains it. This is a very easy and cost-effective way to archive email.
Alternative to PST files – Simple way to store email
Manually archiving email, either to a PST or to a file folder, is not an effective solution for many organizations because you must rely on the users to routinely archive their email. And it is easy for a user to accidentally delete important data.
If you want to be more diligent, or you have regulations that you need to comply with, I suggest you look at an archival product to handle this. There are lots of products on the market. They can be categorized as either on-premises or hosted solutions.
On-premises solutions are the most common. An example of an on-premises solution that works well for a small organization is GFI Archiver. It is a software-based product that is installed on an on-site server. It costs $35/user for the software and $10/user/yr. for maintenance.
GFI email archiving
The advantages of a solution like this are that it is fully automated, so no user involvement is needed in the archive process. This particular product archives emails, calendar items, and files. It has good security and excellent search capability. The disadvantages are the upfront cost and that you may need to add a server onsite.
Hosted archiving solutions, sometimes called cloud archiving or SaaS (software as a service), are becoming more common. They are easy to implement and maintain. An example is a product that I am familiar with, Securance Email Archiver. It is an add-on to the Securance spam-filtering application. I believe it costs less than $10/user per month. This is a really great solution for a small organization.
The 10 Disaster Planning Essentials for a Small Business Network
A disaster can happen at any time on any day. It’s also likely to occur at the most inconvenient time.
If you aren't already prepared, you run the risk of a disaster arriving before you have a plan in place to handle it.
With summer coming up, it’s the perfect time to step back and implement these 10 disaster planning essentials. Make sure that in the event of a disaster, your company can get back up and running in no time.
1. Have a written plan.
As simple as it may sound, just thinking through what needs to happen if your server has a meltdown or a natural disaster wipes out your office will go a long way toward getting your network back. Your plan should contain details on what disasters could happen and a step-by-step process of what to do, who should do it and how. It should also include contact information for various providers and username and password information for various key websites.
Writing this plan will also allow you to think about what you need to budget for backup, maintenance and disaster recovery. If you can't afford to have your network down for more than a few hours, then you need a plan that can get you back up and running within that time frame. You may want the ability to virtualize your server, allowing the office to run off of the virtualized server while the real server is repaired. If you can afford to be down for a couple of days, there are cheaper solutions. Once written, print out copies: store one in a fireproof safe, keep an offsite copy (at your home), and leave a copy with your IT consultant.
2. Hire a trusted professional to help you.
Trying to recover your data after a disaster without professional help is business suicide; one misstep during the recovery process can result in forever losing your data or result in weeks of downtime. Make sure you work with someone who has experience in both setting up business contingency plans (so you have a good framework from which you CAN restore your network) and experience in data recovery.
3. Have a communications plan.
If something should happen where employees couldn’t access your office, e-mail or use the phones, how should they communicate with you? Make sure your plan includes this information, including multiple communication methods.
4. Automate your backups.
If backing up your data depends on a human being doing something, it’s flawed. The number one cause of data loss is human error (people not swapping out tapes properly, someone not setting up the backup to run properly, etc.). Always automate your backups so they run like clockwork.
5. Have an offsite backup of your data.
Always, always maintain a recent copy of your data off site, on a different server, or on a storage device. Onsite backups are good, but they won't help you if they get stolen, flooded, burned or hacked along with your server.
6. Have remote access and management of your network.
Not only will this allow you and your staff to keep working if you can’t go into your office, but you’ll love the convenience it offers. Plus, your IT staff or an IT consultant should be able to access your network remotely in the event of an emergency or for routine maintenance. Make sure they can.
7. Image your server.
Having a copy of your data offsite is good, but keep in mind that all that information has to be restored someplace to be of any use. If you don’t have all the software disks and licenses, it could take days to reinstate your applications (like Microsoft Office, your database, accounting software, etc.) even though your data may be readily available.
Imaging your server is similar to making an exact replica; that replica can then be directly copied to another server saving an enormous amount of time and money in getting your network back. Best of all, you don’t have to worry about losing your preferences, configurations or favorites. To find out more about this type of backup, ask your IT professional.
8. Create network documentation.
Network documentation is simply a blueprint of the software, data, systems and hardware you have in your company’s network. Your IT manager or IT consultant should put this together for you. This will make the job of restoring your network faster, easier AND cheaper. It also speeds up the process of everyday repairs on your network since the technicians don’t have to spend time figuring out where things are located and how they are configured. And finally, should disaster strike, you have documentation for insurance claims of exactly what you lost. Again, have your IT professional document this and keep a printed copy with your disaster recovery plan.
9. Maintain your system.
One of the most important ways to avoid disaster is by maintaining the security of your network. While fires, floods, theft and natural disasters are certainly a threat, you are much more likely to experience downtime and data loss due to a virus, worm or hacker attack. That’s why it’s critical to keep your network patched, secure and up-to-date. Additionally, monitor hardware for deterioration and software for corruption. This is another overlooked threat that can wipe you out. Make sure you replace or repair aging software or hardware to avoid this problem.
10. Test, test, test!
A study conducted in October 2007 by Forrester Research and the Disaster Recovery Journal found that 50 percent of companies test their disaster recovery plan just once a year, while 14 percent never test. If you are going to go through the trouble of setting up a plan, then at least hire an IT pro to run a test once a month to make sure your backups are working and your system is secure. After all, the worst time to test your parachute is AFTER you’ve jumped out of the plane.
If this sounds overwhelming, you are not alone. That is why many businesses are moving to hosted desktops or cloud computing.
The challenges that small businesses deal with never end — and for the small number of employees who have to take on these tasks, it can quickly get overwhelming. No wonder, then, that many small businesses have all but ignored the important task of developing a disaster recovery plan, which involves understanding the risks of the disasters that small businesses face, figuring out how best to prevent against the deleterious effects of these disasters, and implementing a business continuity solution to minimize downtime.
Importantly, the disasters that cause small organizations the most damage are the ones that many business owners may not consider all that common, such as hardware failure and power outages. This blog post aims to illuminate five common disasters that small businesses face, so that business owners have a sense of perspective when considering the importance of a disaster recovery strategy. You would probably guess that the most common disasters are floods, tornadoes, and other major storms. You may be surprised to learn that the common causes are much smaller problems that have a huge impact on businesses.
1. Hardware failure
One of the most disruptive disasters that can strike a small business at any time is hardware failure. Whether it is a clicking hard drive in an email server or a fried motherboard inside a central file server, any kind of hardware failure can result in the inability to access critical data. Possibly the worst aspect of hardware failure is that it is inevitable, yet completely unpredictable. In fact, a recent survey of nearly 400 partners by data protection firm StorageCraft revealed that 99% of them had experienced a hardware failure, with 80.9% of those failures attributable to hard drive malfunctions.1 Failed hardware leads to downtime and lost productivity, both of which can cost small businesses dearly.
2. Software corruption
Permanent corruption of server data, such as corruption of the server’s operating system or damage to line-of-business applications that run on the server, could lead to significant downtime. Even the most sophisticated storage apparatuses are not immune to software corruption: a study by CERN, the world’s largest particle physics lab, revealed software corruption in 1 out of every 1,500 files.2 Software corruption could severely disrupt small businesses that do not have a backup and disaster recovery solution in place.
3. Cyber-attacks
Viruses, worms, Trojans — any and all forms of malware can wreak serious havoc on small businesses. According to the National Small Business Association's Year-End 2014 report, 1 out of every 2 small businesses reported being the victim of a cyber-attack, with the average cost of each cyber-attack exceeding $20,000.3 The consequences stemming from cyber-attacks — such as data theft, data corruption, and permanent data deletion — can seriously affect businesses and their customers. Though deploying a firewall and security software is an important first step, having a fallback continuity strategy in place in case cyber-attacks get through to a company's systems is crucial.
4. Power outages
Blackouts, power shortages, and other power-related issues are not as uncommon as many businesses think. In fact, a 2014 survey by power management firm Eaton Electrical revealed that 37% of IT professionals had dealt with “unplanned downtime due to power-related issues in the last 24 months,” with 32% of outages lasting longer than four hours.4 Even more concerning are the high costs of downtime; according to a May 2013 survey by research firm Aberdeen Group, the average cost of downtime for small companies was a whopping $8,581 per hour.5 Electrical issues are real — and they are costly.
5. Natural or site-wide disasters
Natural disasters, such as tornadoes, earthquakes, and hurricanes, can cripple small businesses. Even more threatening are fires, floods, and other common catastrophes that can occur regardless of a particular geographic location's propensity toward certain natural disasters. Since these disasters and catastrophes almost always lead to site-wide damage, small businesses with only one or two locations are especially vulnerable. No amount of money spent can prevent site-wide and natural disasters from occurring; the only recourse for businesses affected by these calamities is to get back up and running as soon as possible after they happen.
The aforementioned disasters that could befall a small business are relatively consistent across different organizations and industries. Understanding these disasters is just the first step; the next, and more important, task is for every small business to figure out how best to guard itself against these threats.
Adopting business continuity services is essential for every small business looking to protect their data and quickly recover from disasters. Business continuity services ensure that all of a business’s digital data is securely backed up off-site and recoverable whenever necessary. If you would like to learn more about our business continuity services please contact me at larry.phelps at marconet.com
1 “Which Hardware Fails the Most and Why.” Web log post. StorageCraft Recovery Zone. StorageCraft, 2015. Web. 30 June 2015.
2 Panzer-Steindel, Bernd. Data Integrity. Tech. CERN, 8 Apr. 2007. Web. 20 June 2015.
3 2014 Year-End Economic Report. Rep. National Small Business Association, Feb. 2015. Web. 15 June 2015.
4 How ‘Software-Defined’ Is Redefining the Modern Data Center. White Paper. Eaton Corporation, Oct. 2014. Web. 19 June 2015.
5 Business Continuity and Disaster Recovery: Don’t Go It Alone. Analyst Insight. Aberdeen Group, June 2013. Web. 10 June 2015.
source – used by permission of efile inc.
There are so many potential problems that can cause IT downtime for your business that it makes financial sense to understand how much outages could cost you. Many business owners don't realize it, but the average small business loses more than $55,000 in revenue due to IT failures each year.1 But these costs are unique to every business.
Knowing specifically how much downtime will cost an organization is critical for understanding what kind of investment in backup and disaster recovery makes sense for a business. Having a solid ballpark number allows these organizations to use cold, hard facts to weigh their economic tolerance for how much data loss and downtime they can afford to suffer, and to compare it against the investment they'll choose to make in backup and disaster recovery systems.
Causes of Downtime
Before delving into costs, it helps to understand what can cause downtime within a typical small business. Most downtime events fall into two categories: everyday disasters and catastrophic site-wide disasters.
Everyday disasters usually account for 95% to 98% of downtime events that SMBs encounter.2 As common as these incidents may be, these disasters are far from mundane, so don’t let the everyday designation offer you a false sense of security. Sometimes, something as simple as a server crash could cause six hours of downtime for an email system—so while something may be an everyday disaster, that doesn’t mean it isn’t costly.
These kinds of issues can manifest themselves in a lot of different ways. For example, hardware issues such as fried motherboards, hard drive failures and bad fans and power supplies can all knock out systems for some time. These are typically the most common sources of downtime, accounting for about 55% of resiliency issues within SMBs. Further exacerbating these issues is the fact that even when these systems are covered by warranties, that may not be a guarantee that the manufacturer can actually get a replacement shipped and installed in a timely manner.
1 “IT Downtime Costs $26.5 Billion In Lost Revenue,” InformationWeek, May 24, 2011
2 “Most SMB Downtime Caused by Hardware Failures,” Midsize Insider, Feb. 21, 2013
Issues such as software or database corruption or deleted items can also pose hazards. Similarly, connectivity problems from misconfigured networking gear, interruption of Internet access and fiber cuts can also cause meaningful outages. And, finally, lack of redundancy in systems such as firewalls, switches, Wi-Fi components, routers and servers can all contribute to downtime.
In many cases, these problems are triggered by a user error of some sort. User errors are the top causes of downtime for SMBs, causing about a quarter of incidents.
Meanwhile, site-wide disasters happen less frequently — but when they do occur, they have the potential to be ruinous for an SMB that's dependent on its IT resources. These are the types of events that most people immediately associate with the word "disaster" — catastrophic incidents like fires or floods, or natural disasters such as tornadoes, hurricanes and earthquakes. When these disasters occur, their effects are rarely isolated to certain systems or servers.
The ultimate lesson is that it is almost inevitable that an SMB will at some point or another face some form of downtime. The question is, how much will these events hit their bottom line? And what kind of investment in business continuity makes sense to offset these potential losses? In order to answer these questions, organizations need to understand how much downtime will cost them when it affects certain systems and hits the organization site-wide. This can then be used to weigh against the likelihood of the downtime and the cost of the preventative disaster recovery measures needed to offset the potential costs.
Understanding the Cost of Downtime
Downtime tends to cost organizations most when it hits mission-critical systems or other systems that employees need to do their daily work. So the basic utilities like Internet access, phones and email will all obviously take a toll on the business when they're down. But even when these utilities are up, businesses feel a financial impact when line-of-business applications, cloud applications, or any other systems needed to book revenue or perform services go down.
Those dollars-and-cents consequences tend to be felt both as tangible hard costs and less-quantifiable soft costs. Hard costs include things like lost revenue and customer churn. Soft costs include damage to brand reputation and customer satisfaction due to service-quality degradation.
Calculating the Cost of Downtime
Obviously, soft costs can be extremely tricky to calculate. So in order to come to a reliable estimate of your cost of downtime, it makes sense to focus primarily on hard costs.
One simple but effective calculation is the following: (Annual revenue / workdays per year) / operating hours per day = revenue lost per hour of downtime
As you make the calculation, be sure to factor in whether downtime would be complete or isolated based on concentration of offices or workplace. So, say you had a healthy midsized company that was pulling in $20 million per year. The company is open an average of 23 days per month, with about 12 operating hours per day. And about 50% of the firm’s mission-critical employees work at company headquarters.
To understand the cost of downtime for critical systems at company HQ, you’d start with that simple calculation:
($20 million / 276 workdays) / 12 hours per day ≈ $6,000 in lost revenue per hour of downtime company-wide
Then you’d account for site specificity:
($6,000 per hour lost company-wide) × 0.50 = $3,000 per hour of downtime at corporate headquarters
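The two calculations above can be combined into a small script. This is just a sketch of the arithmetic, using the hypothetical figures from the example:

```python
def hourly_downtime_cost(annual_revenue, workdays_per_year, hours_per_day, site_share=1.0):
    """Estimate hard-cost revenue lost per hour of downtime.

    site_share scales the company-wide figure down to a single site,
    e.g. 0.50 if half the mission-critical employees work there.
    """
    company_wide = annual_revenue / workdays_per_year / hours_per_day
    return company_wide * site_share

# Hypothetical figures from the example: $20M annual revenue,
# 23 workdays/month (276/year), 12 operating hours/day, and 50%
# of mission-critical employees at headquarters.
company_wide = hourly_downtime_cost(20_000_000, 23 * 12, 12)
hq_only = hourly_downtime_cost(20_000_000, 23 * 12, 12, site_share=0.50)
print(f"Company-wide: ${company_wide:,.0f}/hour")  # roughly $6,000
print(f"HQ only:      ${hq_only:,.0f}/hour")       # roughly $3,000
```

Remember that this captures only hard costs; the soft costs discussed earlier come on top of these figures.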
While soft costs are more difficult to calculate, businesses should still keep these in mind when weighing the risks — when communicating these numbers to decision makers, it helps to verbally explain that these are minimum baseline costs.
Once a downtime hard cost has been estimated, organizations can start to think about their tolerance for downtime or outages. The basic gist of these tolerances is to understand just how much financial impact the organization can absorb without too much business disruption.
This includes recovery point objectives (how much data loss can you tolerate?) and recovery time objectives (how much downtime can you afford?).
Some businesses in innovative industries may have a very low tolerance for recovery point objectives, lest the loss of something like engineering blueprints set back projects months or years. And other businesses in service industries might have low tolerance for recovery time objectives due to demanding customers requiring 24/7 care. It all depends on the business.
Once those tolerances have been set, that should drive your disaster recovery program. Each component of a disaster recovery program should be designed to ensure that any one disaster event will never yield downtime or data loss that is above those tolerance levels.
These components include:
• Redundant infrastructure and connectivity.
• Backup and disaster recovery systems.
• Services and processes to rapidly recover key systems to mitigate the cost of downtime.
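To make the tolerance check concrete, here is a minimal sketch of comparing each system's backup design against the RPO and RTO targets described above. All system names, intervals, and tolerance figures here are hypothetical examples:

```python
# RPO = max tolerable data loss, RTO = max tolerable downtime, in hours.
# These tolerance figures are hypothetical.
TOLERANCES = {"rpo_hours": 4, "rto_hours": 8}

# Worst-case data loss is driven by the backup interval; worst-case
# downtime by the estimated time to restore the system.
systems = [
    {"name": "email server", "backup_interval_hours": 1, "restore_hours": 6},
    {"name": "file server", "backup_interval_hours": 24, "restore_hours": 12},
]

def meets_tolerances(system, tolerances):
    """A system passes only if both its worst-case data loss (RPO)
    and its worst-case downtime (RTO) fall within the tolerances."""
    return (system["backup_interval_hours"] <= tolerances["rpo_hours"]
            and system["restore_hours"] <= tolerances["rto_hours"])

for s in systems:
    status = "OK" if meets_tolerances(s, TOLERANCES) else "NEEDS ATTENTION"
    print(f"{s['name']}: {status}")
```

Any system flagged here would need a shorter backup interval or a faster recovery path before the plan meets its stated tolerances.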
This may be overwhelming or too costly for you to consider. Cloud Computing is an excellent solution for many organizations like yours. Please contact me if you wish to look into how Cloud Computing can help you solve these potential problems.
You probably think that most IT departments are not interested in moving their IT to the Cloud.
A recent survey by Deloitte found that only a minority of the IT directors surveyed felt that way. In fact, only 37% of IT executives preferred to keep their infrastructure on-premises. The remaining 63% favored cloud solutions.
In a recent announcement, Netflix said it is moving 100% of its IT operations to the cloud. This move shows that cloud computing can be a great solution for a company of any size. In fact, a recent study by BetterCloud found that 12% of companies have moved all of their IT to the cloud, and it is projected that 20% of all large companies will be 100% cloud within the next 5 to 7 years. This is quite a change, because many people thought that only small and mid-sized companies would move to total cloud solutions.
In George Orwell’s novel Nineteen Eighty-Four, every citizen is under constant surveillance by the authorities. Big Brother is the fictional character who is said to be always watching you.
With the onset of the Internet of Things (IoT), are we about to enter the plot of 1984? Here is a short video describing IoT.
This technology could certainly be used to keep you in constant surveillance but it also could do a lot of good for you.
If you lie awake worrying about IT, you are not alone. A recent survey by Hitachi Data Systems found five things that most IT executives worry about: loss of customer data, loss of revenue, breach of customer privacy, unexpected extra costs, and failure to deliver expected ROI.
These are some of the very reasons that many organizations are migrating to the Cloud.