Archive for the ‘Tech’ Category


Effective IT security habits of highly secure companies

by admin ·

You’re far more vulnerable to hackers than you think. Here are the secrets to staying secure

When you get paid to assess computer security practices, you get a lot of visibility into what does and doesn’t work across the corporate spectrum. I’ve been fortunate enough to do exactly that as a security consultant for more than 20 years, analyzing anywhere from 20 to 50 companies of varying sizes each year. If there’s a single conclusion I can draw from that experience, it’s that successful security strategies are not about tools but about teams.

With very good people in the right places, supportive management, and well-executed protective processes, you have the makings of a very secure company, regardless of the tools you use. Companies that have an understanding of the importance and value of computer security as a crucial part of the business, not merely as a necessary evil, are those least likely to suffer catastrophic breaches. Every company thinks they have this culture; few do.

The following is a collection of common practices and strategies of the most highly secure companies I have had the opportunity to work with over the years. Consider it the secret sauce of keeping your company’s crown jewels secure.
Focus on the right threats

The average company faces an unprecedented array of threats: malware, human adversaries, corporate hackers, hacktivists, governments (foreign and domestic), even trusted insiders. We can be hacked over copper wire, using energy waves, radio waves, even light.

Because of this, there are literally thousands of things we are told we need to do well to be “truly secure.” We are asked to install hundreds of patches each year to operating systems, applications, hardware, firmware, computers, tablets, mobile devices, and phones — yet we can still be hacked and have our most valuable data locked up and held for ransom.

Great companies realize that most security threats are noise that doesn’t matter. They understand that at any given time a few basic threats make up most of their risk, so they focus on those threats. Take the time to identify your company’s top threats, rank those threats, and concentrate the bulk of your efforts on the threats at the top of the list. It’s that simple.
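One way to make that ranking concrete is a simple likelihood-times-impact score. The sketch below is purely illustrative; the threat names and scores are hypothetical, and a real register would come from your own assessment data:

```python
# Hypothetical threat register: likelihood and impact scores (1-10) are
# illustrative, not from any real assessment.
THREATS = [
    {"name": "phishing/social engineering", "likelihood": 9, "impact": 8},
    {"name": "unpatched software exploit",  "likelihood": 8, "impact": 9},
    {"name": "ransomware via email",        "likelihood": 7, "impact": 9},
    {"name": "SNMP misconfiguration",       "likelihood": 2, "impact": 4},
    {"name": "insider data theft",          "likelihood": 3, "impact": 7},
]

def rank_threats(threats, top_n=3):
    """Score each threat as likelihood * impact and return the top N."""
    ranked = sorted(threats,
                    key=lambda t: t["likelihood"] * t["impact"],
                    reverse=True)
    return ranked[:top_n]

for t in rank_threats(THREATS):
    print(t["name"], t["likelihood"] * t["impact"])
```

Even a crude score like this makes the point: the SNMP-style edge cases fall off the bottom of the list, and the handful of threats at the top are where the bulk of your effort belongs.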

Most companies don’t do this. Instead, they juggle dozens to hundreds of security projects continuously, with most languishing unfinished or fulfilled only against the most minor of threats.

Think about it. Have you ever been hacked using a vector that involved SNMP or an unpatched server management interface card? Have you ever even read of such an attack in the real world? Then why ask me to include them as top priorities in my audit reports, as one customer did? Meanwhile, your environment is compromised on a near-daily basis via other, much more common exploits.

To successfully mitigate risk, ascertain which risks need your focus now and which can be left for later.
Know what you have

Sometimes the least sexy stuff helps you win. In computer security, this means establishing an accurate inventory of your organization’s systems, software, data, and devices. Most companies have little clue as to what is really running in their environments. How can you even begin to secure what you don’t know?

Ask yourself how well your team understands all the programs and processes that are running when company PCs first start up. In a world where every additional program presents another attack surface for hackers, is all that stuff needed? How many copies of which programs do you have in your environment and what versions are they? How many mission-critical programs form the backbone of your company, and what dependencies do they have?

The best companies have strict control over what runs where. You cannot begin that process without an extensive, accurate map of your current IT inventory.
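At its simplest, that inventory map is a matter of aggregating what each machine reports it runs. Here's a minimal sketch; the machine names, programs, and versions are hypothetical, and real data would come from your endpoint-management or discovery tooling:

```python
from collections import defaultdict

# Hypothetical per-machine software inventory (program -> version), as an
# agent or login script might report it.
INVENTORY = {
    "pc-001":  {"java": "1.7.0", "office": "2016"},
    "pc-002":  {"java": "1.8.0", "office": "2016"},
    "srv-001": {"java": "1.6.0", "sqlserver": "2014"},
}

def version_map(inventory):
    """Map each program to {version: machine count} across the estate."""
    versions = defaultdict(lambda: defaultdict(int))
    for machine, software in inventory.items():
        for program, version in software.items():
            versions[program][version] += 1
    return {p: dict(v) for p, v in versions.items()}

print(version_map(INVENTORY))
```

Even this toy data shows Java on three machines in three different versions, which is exactly the kind of sprawl an accurate inventory exposes and a spreadsheet-free environment never sees.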
Remove, then secure

An unneeded program is an unneeded risk. The most secure companies pore over their IT inventory, removing what they don’t need, then reduce the risk of what’s left.

I recently consulted for a company that had more than 80,000 unpatched Java installations, spread over five versions. The staff never knew they had so much Java. Domain controllers, servers, workstations — it was everywhere. As far as anyone knew, exactly one mission-critical program required Java, and that ran on only a few dozen application servers.

They queried personnel and immediately reduced their Java footprint to a few hundred computers and three versions, fully patching most of those machines. The few dozen that could not be patched became the real work. They contacted vendors to find out why Java versions could not be updated, changed vendors in a few cases, and implemented offsetting risk mitigations where unpatched Java had to remain.

Imagine the difference in risk profile and overall work effort.

This applies not only to every bit of software and hardware, but to data as well. Eliminate unneeded data first, then secure the rest. Intentional deletion is the strongest data security strategy. Make every new data collector define how long their data needs to be kept. Put an expiration date on it. When the time comes, check with the owner to see whether it can be deleted. Then secure the rest.
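Putting an expiration date on data only works if something actually checks it. Here's a minimal sketch of that check; the dataset names, owners, and dates are hypothetical:

```python
from datetime import date

# Hypothetical data registry: every dataset records an owner and an
# expiration date at creation time.
DATASETS = [
    {"name": "2019-marketing-leads", "owner": "alice",
     "expires": date(2021, 1, 1)},
    {"name": "payroll-current",      "owner": "bob",
     "expires": date(2099, 1, 1)},
]

def expired(datasets, today):
    """Return datasets past their expiration date, ready for owner review."""
    return [d for d in datasets if d["expires"] < today]

for d in expired(DATASETS, date(2022, 6, 1)):
    print(f"ask {d['owner']} whether {d['name']} can be deleted")
```

Run on a schedule, a check like this turns "intentional deletion" from a policy statement into a recurring conversation with each data owner.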
Run the latest versions

The best security shops stay current with the latest versions of hardware and software. Yes, every big corporation has old hardware and software hanging around, but most of their inventory is composed of the latest version or the one immediately prior (called N-1 in the industry).

This goes not only for hardware and OSes, but for applications and tool sets as well. Procurement costs include not only purchase price and maintenance but also future version upgrades. The owners of those assets are responsible for keeping them updated.

You might think, “Why update for update’s sake?” But that’s old, insecure thinking. The latest software and hardware come with the latest security features built in, often turned on by default. The biggest threat to the last version was most likely fixed in the current one, leaving older versions that much juicier for hackers looking to exploit known vulnerabilities.
Patch at speed

It’s advice so common as to seem cliché: Patch all critical vulnerabilities within a week of the vendor’s patch release. Yet most companies have thousands of unpatched critical vulnerabilities. Still, they’ll tell you they have patching under control.

If your company takes longer than a week to patch, it’s at increased risk of compromise — not only because you’ve left the door open, but because your most secure competitors will have already locked theirs.
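Measuring yourself against that one-week window is straightforward if you track patch release and apply dates. A minimal sketch, with hypothetical CVE identifiers and dates:

```python
from datetime import date

# Hypothetical vulnerability records: when the vendor released the patch
# and when (if ever) it was applied.
VULNS = [
    {"cve": "CVE-2016-0001", "patch_released": date(2016, 1, 5),
     "patched_on": date(2016, 1, 8)},
    {"cve": "CVE-2016-0002", "patch_released": date(2016, 1, 5),
     "patched_on": None},
]

def sla_breaches(vulns, today, sla_days=7):
    """Critical vulns still unpatched more than sla_days after release."""
    return [
        v["cve"] for v in vulns
        if v["patched_on"] is None
        and (today - v["patch_released"]).days > sla_days
    ]

print(sla_breaches(VULNS, date(2016, 2, 1)))
```

Anything this report prints is a door that has been standing open past the deadline, which is a far more honest metric than "we have patching under control."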

Officially, you should test patches before applying, but testing is hard and wastes time. To be truly secure, apply your patches and apply them quickly. If you need to, wait a few days to see whether any glitches are reported. But after a short wait, apply, apply, apply.

Critics may claim that applying patches “too fast” will lead to operational issues. Yet, the most successfully secure companies tell me they don’t see a lot of issues due to patching. Many say they’ve never had a downtime event due to a patch in their institutional memory.
Educate, educate, educate

Education is paramount. Unfortunately, most companies view user education as a great place to cut costs, or, if they do educate, their training is woefully out of date, filled with scenarios that no longer apply or that focus on rare attacks.

Good user education focuses on the threats the company is currently facing or is most likely to face. Education is led by professionals, or even better, it involves co-workers themselves. One of the most effective videos I’ve seen warned of social engineering attempts by highlighting how some of the most popular and well-liked employees had been tricked. By sharing real-life stories of their fallibility, these co-workers were able to train others in the steps and techniques to prevent becoming a victim. Such a move makes fellow employees less reluctant to report their own potential mistakes.

Security staff also needs up-to-date security training. Each member, each year. Either bring the training to them or allow your staff to attend external training and conferences. This means training not only on the stuff you buy but on the most current threats and techniques as well.
Keep configurations consistent

The most secure organizations have consistent configurations with little deviation between computers of the same role. Most hackers are more persistent than smart. They simply probe and probe, looking for that one hole in thousands of servers that you forgot to fix.

Here, consistency is your friend. Do the same thing, the same way, every time. Make sure the installed software is the same. Don’t have 10 ways to connect to the server. If an app or a program is installed, make sure the same version and configuration is installed on every other server of the same class. You want the comparison inspections of your computers to bore the reviewer.

None of this is possible without configuration baselines and rigorous change and configuration control. Admins and users should be taught that nothing gets installed or reconfigured without prior documented approval. But beware frustrating your colleagues with full change committees that meet only once a month. That’s corporate paralysis. Find the right mix of control and flexibility, but make sure any change, once ratified, is consistent across computers. And punish those who don’t respect consistency.

Remember, we’re talking baselines, not comprehensive configurations. In fact, you’ll probably get 99 percent of the value out of a dozen or two recommendations. Figure out the settings you really need and forget the rest. But be consistent.
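Checking servers of the same class against that short baseline is a job a script can do. Here's an illustrative sketch; the setting names, values, and server names are hypothetical:

```python
# Hypothetical baseline of the dozen-or-two settings that really matter,
# checked against what each server of the same class actually runs.
BASELINE = {"smb1": "disabled", "rdp_nla": "enabled", "tls_min": "1.2"}

SERVERS = {
    "web-01": {"smb1": "disabled", "rdp_nla": "enabled", "tls_min": "1.2"},
    "web-02": {"smb1": "enabled",  "rdp_nla": "enabled", "tls_min": "1.0"},
}

def drift(baseline, servers):
    """Return {server: {setting: (expected, actual)}} for every deviation."""
    report = {}
    for name, config in servers.items():
        diffs = {
            key: (want, config.get(key))
            for key, want in baseline.items()
            if config.get(key) != want
        }
        if diffs:
            report[name] = diffs
    return report

print(drift(BASELINE, SERVERS))
```

When this report comes back empty, the comparison inspection bores the reviewer, which is exactly the goal; when it doesn't, you've found the one hole a persistent hacker was going to find first.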
Practice least-privilege access control religiously

“Least privilege” is a security maxim. Yet you’ll be hard-pressed to find companies that implement it everywhere they can.

Least privilege involves giving the bare minimum permissions to those who need them to do an essential task. Most security domains and access control lists are full of overly open permissions and very little auditing. The access control lists grow to the point of being meaningless, and no one wants to talk about it because it’s become part of the company culture.

Take Active Directory forest trusts. Most companies have them, and they can be set to either selective authentication or full authentication. Almost every trust I’ve audited in the past 10 years (thousands) has been full authentication. And when I recommend selective authentication for all trusts, all I hear back is whining about how hard it is to implement: “But then I have to touch each object and tell the system explicitly who can access it!” Yes, that’s the point. That’s least privilege.

Access controls, firewalls, trusts — the most secure companies always deploy least-privilege permissions everywhere. The best have automated processes that ask the resource’s owner to reverify permissions and access on a periodic basis. The owner gets an email stating the resource’s name and who has what access, then is asked to confirm current settings. If the owner fails to respond to follow-up emails, the resource is deleted or moved elsewhere with its previous permissions and access control lists removed.

Every object in your environment — network, VLAN, VM, computer, file, folder — should be treated the same way: least privilege with aggressive auditing.
Get as near to zero as you can

To do their worst, the bad guys seek control of high-privileged admin accounts. Once they have control over a root, domain, or enterprise admin account, it’s game over. Most companies are bad at keeping hackers away from these credentials. In response, highly secure companies are going “zero admin” by doing away with these accounts. After all, if your admin team doesn’t have super accounts, or rarely uses them, those credentials are far less likely to be stolen, and easier to detect and stop when they are.

Here, the art of credential hygiene is key. This means using the smallest number of permanent superadmin accounts possible, with a goal of getting to zero or as near to zero as you can. Permanent superadmin accounts should be highly tracked, audited, and confined to a few predefined areas. And you should not use widely available super accounts, especially as service accounts.

But what if someone needs a super credential? Try using delegation instead. This allows you to give only enough permissions to the specific objects that person needs to access. In the real world, very few admins require complete access to all objects. That’s insanity, but it’s how most companies work. Instead, grant rights to modify one object, one attribute, or at most a small subset of objects.

This “just enough” approach should be married with “just in time” access, with elevated access limited to a single task or a set period of time. Add in location constraints (for example, domain admins can only be on domain controllers) and you have very strong control indeed.
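The "just enough, just in time" combination boils down to a grant scoped to one task that expires on its own. A minimal sketch; the user, scope string, and one-hour lifetime are hypothetical:

```python
from datetime import datetime, timedelta

class JitGrant:
    """A hypothetical just-in-time grant: one task scope, fixed lifetime."""

    def __init__(self, user, scope, granted_at, ttl_minutes=60):
        self.user = user
        self.scope = scope  # e.g. "reset-password:ou=Sales"
        self.expires = granted_at + timedelta(minutes=ttl_minutes)

    def allows(self, user, scope, now):
        """Permit only the named user, the named task, before expiry."""
        return user == self.user and scope == self.scope and now < self.expires

grant = JitGrant("helpdesk1", "reset-password:ou=Sales",
                 datetime(2016, 1, 1, 9, 0))
print(grant.allows("helpdesk1", "reset-password:ou=Sales",
                   datetime(2016, 1, 1, 9, 30)))   # within the hour: True
print(grant.allows("helpdesk1", "reset-password:ou=Sales",
                   datetime(2016, 1, 1, 11, 0)))   # expired: False
```

Location constraints of the kind mentioned above would be one more predicate in `allows`, checking the machine the request comes from.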

Note: It doesn’t always take a superadmin account to be all powerful. For example, in Windows, having a single privilege — like Debug, Act as part of the operating system, or Backup — is enough for a skilled attacker to be very dangerous. Treat elevated privileges like elevated accounts wherever possible.

Delegation — just in time, just enough in just the right places — can also help you smoke out the baddies, as they won’t likely know this policy. If you see a superaccount move around the network or use its privileges in the wrong place, your security team will be all over it.
Institute role-based configurations

Least privilege applies to computers as well as humans: every object in your environment should be configured for the role it performs. In a perfect world, it would have access to a particular task only when performing it, and not otherwise.

First, you should survey the various tasks necessary in each application, gather commonly performed tasks into as few job roles as possible, then assign those roles as necessary to user accounts. This will result in every user account and person being assigned only the permissions necessary to perform their allowed tasks.
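The survey-then-assign flow above reduces to two small tables: roles mapped to tasks, and accounts mapped to roles. A sketch with hypothetical role, task, and account names:

```python
# Hypothetical role definitions: common tasks are gathered into as few
# roles as possible, and accounts get roles rather than raw permissions.
ROLES = {
    "helpdesk": {"reset-password", "unlock-account"},
    "dba":      {"backup-db", "restore-db"},
}

ACCOUNTS = {
    "erin":  {"helpdesk"},
    "frank": {"helpdesk", "dba"},
}

def may(account, task, roles=ROLES, accounts=ACCOUNTS):
    """An account may perform a task only if one of its roles includes it."""
    return any(task in roles[r] for r in accounts.get(account, ()))

print(may("erin", "reset-password"))  # True
print(may("erin", "restore-db"))      # False
```

The payoff is that permissions are audited per role, not per account: change what "helpdesk" means once and every helpdesk account follows.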

Role-based access control (RBAC) should be applied to each computer, with every computer with the same role being held to the same security configuration. Without specialized software it’s difficult to practice application-bound RBAC. Operating system and network RBAC-based tasks are easier to accomplish using existing OS tools, but even those can be made easier by using third-party RBAC admin tools.

In the future, all access control will be RBAC. That makes sense because RBAC is the embodiment of least privilege and zero admin. The most highly secure companies are already practicing it where they can.
Separate, separate, separate

Good security domain hygiene is another essential. A security domain is a (logical) separation in which one or more security credentials can access objects within the domain. Theoretically, the same security credential cannot be used to access two security domains without prior agreement or an access control change. A firewall, for example, is the simplest security domain. People on one side cannot easily get to the other side, except via protocols, ports, and so on determined by predefined rules. Most websites are security domains, as are most corporate networks, although they may, and should, contain multiple security domains.

Each security domain should have its own namespace, access control, permissions, privileges, roles, and so on, and these should work only in that namespace. Determining how many security domains you should have can be tricky. Here, the idea of least privilege should be your guide, but having every computer be its own security domain can be a management nightmare. The key is to ask yourself how much damage you can live with if access control falls, allowing an intruder to have total access over a given area. If you don’t want to fall because of some other person’s mistake, consider making your own security domain.

If communication between security domains is necessary (like forest trusts), give the least privilege access possible between domains. “Foreign” accounts should have little to no access to anything beyond the few applications, and role-based tasks within those applications, they need. Everything else in the security domain should be inaccessible.
Emphasize smart monitoring practices and timely response

The vast majority of hacking is actually captured on event logs that no one looks at until after the fact, if ever. The most secure companies monitor aggressively and pervasively for specific anomalies, setting up alerts and responding to them.

The last part is important. Good monitoring environments don’t generate too many alerts. In most environments, event logging, when enabled, generates hundreds of thousands to billions of events a day. Not every event is an alert, but an improperly defined environment will generate hundreds to thousands of potential alerts — so many that they end up becoming noise everyone ignores. Some of the biggest hacks of the past few years involved alerts that were ignored. That’s the sign of a poorly designed monitoring environment.

The most secure companies create a comparison matrix of all the logging sources they have and what they alert on. They compare this matrix to their threat list, matching tasks of each threat that can be detected by current logs or configurations. Then they tweak their event logging to close as many gaps as possible.
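That comparison matrix is just set arithmetic: union what your log sources can detect, subtract it from the threat list, and what remains is your monitoring gap. A sketch with hypothetical log sources and threat-task names:

```python
# Hypothetical matrix: which attacker tasks each log source can detect,
# compared against the ranked threat list to find monitoring gaps.
DETECTS = {
    "dc-security-log": {"password-guessing", "golden-ticket"},
    "proxy-log":       {"c2-beaconing", "phishing-click"},
}

THREAT_TASKS = {"password-guessing", "phishing-click", "usb-malware"}

def coverage_gaps(detects, threat_tasks):
    """Threat tasks that no current log source alerts on."""
    covered = set().union(*detects.values())
    return threat_tasks - covered

print(sorted(coverage_gaps(DETECTS, THREAT_TASKS)))
```

Each task the function returns is a place to tweak event logging or add a source; rerunning it after every change closes the gaps methodically rather than by guesswork.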

More important, when an alert is generated, they respond. When I am told a team monitors a particular threat (such as password guessing), I try to set off an alert at a later date to see whether it fires and anyone responds. Most of the time, no one does. Secure companies have people jumping out of their seats when they get an alert, asking others what is going on.
Practice accountability and ownership from the get-go

Every object and application should have an owner (or group of owners) who controls its use and is accountable for its existence.

Most objects at your typical company have no owners, and IT can’t point to the person who originally asked for the resource, let alone know if it is still needed. In fact, at most companies, the number of groups that have been created is greater than the number of active user accounts. In other words, IT could assign each individual his or her own personal, custom group and the company would have fewer groups to manage than they currently have.

But then, no one knows whether any given group can be removed. They live in fear of deleting any group. After all, what if that group is needed for a critical action and deleting it inadvertently brings down a mission-dependent feature?

Another common example is when, after a successful breach, a company needs to reset all the passwords in the environment. However, you can’t do this willy-nilly because some are service accounts attached to applications and require the password to be changed both inside the application and for the service account, if it can be changed at all.

But then no one knows if any given application is in use, if it requires a service account, or if the password can be changed, because ownership and accountability weren’t established at the outset, and there’s no one to ask. In the end, the application is left alone, because you’re far more likely to get fired for causing a critical operational interruption than for letting a hacker stick around.

Prioritize quick decisions

Most companies are stunted by analysis paralysis. A lack of consistency, accountability, and ownership renders everyone afraid to make a change. And the ability to move quickly is essential when it comes to IT security.

The most secure companies establish a strong balance between control and the ability to make quick decisions, which they promote as part of the culture. I’ve even seen specialized, hand-selected project managers put on long-running projects simply to polish off the project. These special PMs were given moderate budgetary controls, the ability to document changes after the fact, and leeway to make mistakes along the way.

That last part is key when it comes to moving quickly. In security, I’m a huge fan of the “make a decision, any decision, we’ll apologize later if we need to” approach.

Contrast that with your typical company, where most problems are deliberated to death, leaving them unresolved when the security consultants who recommended a fix are called in to come back next year.
Have fun

Camaraderie can’t be overlooked. You’d be surprised by how many companies think that doing things right means a lack of freedom and fun. For them, hatred from co-workers must be a sign that a security pro is doing good work. Nothing could be further from the truth. When you run an efficient security shop, you don’t get saddled with the stress of constantly rebuilding computers and servers. You don’t lie awake wondering when the next successful hack will come. You worry less because you know you have the situation under control.

I’m not saying that working at the most secure companies is a breeze. But in general, they seem to be having more fun and liking each other more than at other companies.
Get to it

The above common traits of highly secure companies may seem like common sense; some, such as fast patching and secure configurations, are long-established practice. But don’t be complacent about your knowledge of sound security practices. The difference between companies that succeed at securing the corporate crown jewels and those that suffer breaches comes down to two main traits: concentrating on the right elements, and instilling a pervasive culture of doing the right things, not just talking about them. The secret sauce is all here in this article. It’s now up to you to roll up your sleeves and execute.

Good luck and fight the good fight!



10 boot camps to kick start your data science career

by admin ·

Data science is one of the fastest growing careers today and there aren’t enough employees to meet the demand. As a result, boot camps are cropping up to help get workers up to speed quickly on the latest data skills.

Data Scientist is the best job in America, according to data from Glassdoor, which found that the role has a significant number of job openings and that data scientists earn an average salary of more than $116,000. According to its data, the job of data scientist rated a 4.1 out of 5 for career opportunity and a 4.7 for job satisfaction. But as demand for the role grows, traditional schools aren’t churning out qualified candidates fast enough to fill the open positions. There’s also no clear path for those who have been in the tech industry for years and want to take advantage of these lucrative opportunities.

Enter the boot camp, a trend that has quickly grown in popularity as a way to train workers in in-demand tech skills. Here are 10 data science boot camps designed to help you brush up on your data skills, with courses for anyone from beginners to experienced data scientists.

Bit Bootcamp

Located in New Jersey, Bit Bootcamp offers both part-time and full-time courses in data analytics that last four weeks. It has a rolling start date, and courses cost between $1,500 and $6,500, according to data from Course Report. It’s a great option for students who already have a background in SQL as well as object-oriented programming languages such as Java, C# or C++. Attendees can expect to work on real problems they might face in the workplace, whether at a startup or a large corporation. The course concludes with a Hadoop certification exam that draws on the skills learned over the four weeks.
Price: $1,500–$6,500

NYC Data Science Academy
The NYC Data Science Academy offers 12-week courses in data science that combine “intensive lectures and real world project work,” according to Course Report. It’s aimed at more experienced data scientists who have a master’s or Ph.D. degree. Courses include training in R, Python, Hadoop, GitHub and SQL with a focus on real-world application. Participants will walk away with a portfolio of five projects to show to potential employers, as well as a Capstone Project that spans the last two weeks of the course. The NYC Data Science Academy also helps students garner interest from recruiters and hiring managers through partnerships with businesses. In the last week of the course, students participate in mock interviews and job search prep; many will also have the opportunity to interview with hiring tech companies in the New York and Tri-State area.
Price: $16,000

The Data Incubator
The Data Incubator is another program aimed at more experienced tech workers who have a master’s or Ph.D., but it’s unique in that it offers fellowships, which means students who qualify can attend for free. Fellowships, which must be completed in person, are available in New York City, Washington, D.C., and the Bay Area. The program also offers students mentorship directly from hiring companies, including LinkedIn, Microsoft and The New York Times, all while they work on building a portfolio to showcase their skills. The boot camp programs run for eight weeks, and students need to have a background in engineering and science skills. Attendees can expect to leave this program with data skills that will be applicable at real-world companies.
Price: Free for those accepted

Galvanize
Galvanize has six campuses, located in Seattle; San Francisco; Denver, Fort Collins and Boulder, Colo.; Austin, Texas; and London. The focus of Galvanize is to develop entrepreneurs through a diverse community of students, including programmers, data scientists and Web developers. Galvanize boasts a 94 percent placement rate for its data science program since 2014, and students can apply for partial scholarships of up to $10,500. According to Galvanize, students have gone on to work for companies such as Twitter, Facebook, Airbnb, Tesla and Accenture. This boot camp is intended to combine real-life skills with education so that graduates walk away ready to start a new career or advance at their current company through formal courses, workshops and events.
Price: $16,000

The Data Science Dojo
With campuses in Seattle, Silicon Valley, Barcelona, Toronto, Washington and Paris, the Data Science Dojo brings quick and affordable data science education to professionals around the world. It’s one of the shortest programs on this list — lasting only five days — and it covers data science and data engineering. Before you even attend the program, you will get access to online courses and tutorials to learn the basics of data science. Then you’ll start the in-person program, which consists of 10-hour days over the course of five days. Finally, after the boot camp is complete, you’ll be invited to exclusive events, tutorials and networking groups that will help you continue your education. Due to the short nature of the course, it’s tailored to those already in the industry who want to learn more about data science or brush up on the latest skills. However, unlike some of the other courses on this list, you don’t need a master’s degree or Ph.D. to enroll; it’s aimed at anyone at any skill level who wants to throw themselves into the trenches of data science and become part of a global network of companies and students who have attended the same program.
Price: Free for those accepted

Metis
Metis has campuses in New York and San Francisco, where students can attend intensive in-person data science workshops. Programs take 12 weeks to complete and include on-site instruction, career coaching and job placement support to help students make the best of their newly acquired skills. Similar to other boot camps, Metis’ programs are project-based and focus on real-world skills that graduates can take with them to a career in data science. Those who complete the program can expect to walk away with in-depth knowledge of modern big data tools, access to an extensive network of professionals in the industry and ongoing career support.
Price: $14,000

Data Science for Social Good
This Chicago-based boot camp has specific goals; it focuses on churning out data scientists who want to work in fields such as education, health and energy to help make a difference in the world. Data Science for Social Good offers a three-month long fellowship program offered through the University of Chicago, and it allows students to work closely with both professors and professionals in the industry. Attendees are put into small teams alongside full-time mentors who help them through the course of the fellowship to develop projects and solve problems facing specific industries. The program lasts 14 weeks and students complete 12 projects in partnership with nonprofits and government agencies to help tackle problems currently facing those industries.
Price: Free for those accepted

Level
Offered through Northeastern University, Level is a two-month program that aims to turn you into a hirable data analyst. Each day of the course focuses on a real-world problem that a business might face, and students develop projects to solve these issues. Students can expect to learn more about SQL, R, Excel, Tableau and PowerPoint, and to walk away with experience in preparing data, regression analysis, business intelligence, visualization and storytelling. You can choose between a full-time eight-week course that meets five days a week, eight hours a day, and a hybrid 20-week program that meets online and in person one night a week.
Price: $7,995

Microsoft Research Data Science Summer School
The Microsoft Research Data Science Summer School — or DS3 — runs for eight weeks during the summer. It’s an intensive program intended for upper-level undergraduates or graduating seniors, with the aim of growing diversity in the data science industry. Attendees get a $5,000 stipend as well as a laptop they keep at the end of the program. Classes accommodate only eight people, so the process is selective, and it’s open only to students who already reside in, or can make their own accommodations in, the New York City area.
Price: Free for those accepted

Silicon Valley Data Academy
The Silicon Valley Data Academy, or SVDA, hosts eight-week training programs in enterprise-level data science skills. Those who already have an extensive background in data science or engineering can apply to be a fellow and have the tuition waived. You can expect to learn more about data visualization, data mining, statistics, machine learning and natural language processing, as well as tools such as Hadoop, Spark, Hive, Kafka and NoSQL. Programs consist of a more traditional curriculum, including homework, but they also include guest lectures, field trips to the headquarters of collaborating companies, and projects that offer real-world experience.
Price: Free for those accepted




Microsoft to cut some Azure computing prices

by admin ·

The cloud pricing race to the bottom continues

Good news for businesses using Microsoft’s Azure cloud platform: their infrastructure bills may be shrinking come February.

Microsoft announced that it will be permanently reducing the prices for its Dv2 compute instances by up to 17 percent next month, depending on the type of instance and what it’s being used for. Users will see the greatest savings if they’re running higher performance Linux instances — up to 17 percent lower prices than they’ve been paying previously. Windows instance discounts top out at a 13 percent reduction compared to current prices.
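To put the percentages in concrete terms, here is a back-of-the-envelope sketch. The $0.50/hour rate is invented for illustration (real Dv2 rates are listed on Azure’s pricing page), and a 730-hour always-on month is assumed.

```python
# Back-of-the-envelope arithmetic for a percentage price cut. The hourly
# rate below is hypothetical; real Dv2 rates are on Azure's pricing page.
def discounted_monthly_cost(hourly_rate, discount_pct, hours_per_month=730):
    """Monthly cost of an always-on instance after a percentage price cut."""
    return hourly_rate * (1 - discount_pct / 100) * hours_per_month

# A hypothetical Linux Dv2 instance at $0.50/hour with the maximum 17% cut:
before = discounted_monthly_cost(0.50, 0)    # 365.0 per month
after = discounted_monthly_cost(0.50, 17)    # ~302.95 per month
```

Over a fleet of instances, even a 13-17 percent cut compounds into a meaningful line item on the monthly bill.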

Right now, the exact details of the discount are a little bit vague, but Microsoft says that it will publish full pricing details in February when they go into effect. Dv2 instances are designed for applications that require more compute power and temporary disk performance than Microsoft’s A series instances.

They’re the successor to Azure’s D-series VMs, and come with processors that are 35 percent faster than their predecessors. Greater speed also corresponds to a higher price, but these discounts will make Dv2-series instances more price competitive with their predecessors. That’s good news for price-conscious users, who may be more inclined to reach for the higher-performance instances now that they’ll be cheaper.

The price changes come after Amazon earlier this week introduced scheduled compute instances, which let users pick out a particular time for their workloads to run on a regular basis, and get discounts based on when they decide to use the system. It’s a system that’s designed to help businesses that need computing power for routine tasks at non-peak times get a discount.

Microsoft’s announcement builds on the company’s longstanding history of reducing prices for Azure in keeping with Amazon’s price cuts in order to remain competitive. Odds are we’ll see several more of these cuts in the coming year as the companies continue to duel to try and pick up new users and get existing users to expand their usage of the cloud.




Bad actors race to exploit Juniper firewall vulnerability

by admin ·

Efforts afoot to reverse engineer the flaw and create commodity exploits

Now that Juniper has created a patch for its vulnerable firewall/VPN appliances, bad actors are setting to work reverse engineering the flaw so they can exploit devices that users don’t patch, and also make a profit by selling their exploits to others.

UPDATE: Wired reports a Dutch security firm claims it found the backdoor to ScreenOS within six hours of receiving the patch. Also, Reuters reports the Department of Homeland Security is investigating and CNN says the FBI is investigating as well.

“That’s what they do,” says John Pironti, president of IP Architects, who says he spent Friday responding to concerns about the compromised Juniper firewalls with his clients.

The pattern cyber criminals follow after vendors patch vulnerabilities is to compare the patched code to the unpatched code, figure out what the flawed code was and figure out how to use it to break into the device and the network it protects, Pironti says.
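The diffing step Pironti describes can be illustrated in miniature with Python’s difflib. This is only a toy sketch: real firmware analysis is done on binaries with dedicated diffing tools, and the `check_password` function with its hard-coded "secret-master-pw" backdoor is invented for the example.

```python
import difflib

# Toy illustration of the reverse-engineering step: diff the unpatched code
# against the patched code to see exactly what the vendor changed.
# The function and its hard-coded backdoor password are hypothetical.
unpatched = """\
def check_password(supplied, stored):
    if supplied == "secret-master-pw":  # backdoor
        return True
    return supplied == stored
"""

patched = """\
def check_password(supplied, stored):
    return supplied == stored
"""

diff = difflib.unified_diff(
    unpatched.splitlines(), patched.splitlines(),
    fromfile="firmware_old", tofile="firmware_new", lineterm="")

# Lines removed by the patch point straight at the flaw the patch fixes.
removed = [l for l in diff if l.startswith("-") and not l.startswith("---")]
```

The lines prefixed with `-` are exactly what the vendor deleted, which is why a patch, once published, doubles as a map of the vulnerability for anyone who compares the two versions.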

In this case Juniper says the flaw can be exploited to completely compromise a NetScreen firewall/VPN appliance via unauthorized remote administrator access via telnet or SSH, wipe out logs that would reveal the attack, and decrypt VPN traffic.

Once the reverse engineers do that, they’ll start trying out the exploit on whatever NetScreen devices they can locate in real-world networks, hoping to find ones that aren’t patched, Pironti says. After that, the exploits will go up for sale in underground markets and wend their way into open source penetration-testing platforms such as Metasploit.

Inevitably some users fail to apply critical patches for years and years after they have been issued, he says. “It will be used for years,” he says. “This will not go away overnight.”

Since attackers can erase any trace they exploited a NetScreen appliance, IT security teams should start checking logs in the devices in line behind the firewall/VPNs. They should look for consistent and persistent traffic originating from unfamiliar and atypical IP address ranges that could represent the attackers moving inside the network once they’ve cracked the appliance, Pironti says. “See if they tried to get elsewhere,” he says.
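As a minimal sketch of that kind of log review, the snippet below flags source addresses that fall outside a list of familiar ranges. The log format and the "known" ranges here are hypothetical; a real team would build both from its own network inventory and log schema.

```python
import ipaddress
from collections import Counter

# Hypothetical "familiar" internal ranges; a real team would build this
# list from its own network inventory.
KNOWN_RANGES = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.168.0.0/16")]

def unfamiliar_sources(log_lines):
    """Count connections whose source IP falls outside every known range."""
    hits = Counter()
    for line in log_lines:
        src = ipaddress.ip_address(line.split()[0])  # assumes "SRC DST ..." records
        if not any(src in net for net in KNOWN_RANGES):
            hits[str(src)] += 1
    return hits

sample = ["10.1.2.3 10.9.9.9 ACCEPT",
          "203.0.113.7 10.0.0.5 ACCEPT",
          "203.0.113.7 10.0.0.8 ACCEPT"]
# unfamiliar_sources(sample) -> Counter({'203.0.113.7': 2})
```

The point is the pattern Pironti names: consistent, persistent traffic from one atypical address is a stronger signal than any single flagged packet.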

Meanwhile, as of Friday, Juniper had yet to answer some key questions about the bad code.

In response to emails seeking more information, Juniper reiterated part of its initial announcement about the patches and provided a link to its formal advisory, but that’s it.

Is there any way to find out if the vulnerability has been exploited in a particular device?

“I think that Juniper does owe us more information,” says Joel Snyder, senior partner in Opus One, a technology consultancy that has tested network firewalls for Network World. “In any case, I think that Juniper should be forthcoming with more information to let us know if they think that this was put in accidentally, on purpose, and by whom.”

It’s possible the bug was put there by a nation-state, he says, but “I would guess that it is just as likely that this is a human error and someone put something in ignorantly or for debugging that they forgot to take out.”

“People have been quick to say that this is linked to the NSA/InfoSec community in the [U.S. government], but I seriously doubt that. … This was something IN the code, and it was introduced in the last few years after the product was REALLY mature.”

But the wording of the Juniper announcement – it pins the problem on “unauthorized code” – makes Pironti think it was an implant, software placed in the operating system intentionally to facilitate attacks. “Unauthorized code, to me, means an implant. It’s not like someone fat-fingered an entry.”





Hitch your IT career to a rising star with DevOps certification

by admin ·

Hitch your IT career to a rising star with DevOps certification

Savvy IT industry watchers have probably been noticing something called “DevOps” come gliding into view for a while now, striking regular pings on the scope of anyone scanning for either hot trends or spiking salaries. Even proponents of DevOps, however, sometimes struggle to define it in layman’s terms, a challenge that anyone who has ever tried to explain development methods like Agile or Scrum to someone outside of IT will understand. Beneath the jargon, however, there’s an important development model that is quickly gaining in popularity. If you’re involved in IT, then this is something that’s probably worth taking the time to understand.

What is DevOps?

DevOps is a compound of “development” and “operations.” It’s a software development method that stresses communication, collaboration, integration, automation, and measurement of cooperation between software developers and other information technology professionals. DevOps is often shown graphically as three overlapping circles consisting of Development, Quality Assurance, and Information Technology Operations, with DevOps being the area of overlap that ties all three circles together.

DevOps is so much more, however, than the intersection of three circles. It’s often the intersection of five or ten circles — it just depends on the company that DevOps is supporting. DevOps spans the entire delivery pipeline, and its benefits include improved deployment frequency, which can lead to faster time to market, a lower failure rate for new releases, shortened lead time between fixes, and faster mean time to recovery in the event of a new release crashing or otherwise disabling the current system. Simple processes become increasingly programmable and dynamic when using a DevOps approach, which aims to maximize the predictability, efficiency, security, and maintainability of operational processes. Automation often supports this objective.
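The delivery-pipeline measures named above (deployment frequency, release failure rate, mean time to recovery) can be computed from a simple release log. The records below are invented for illustration.

```python
from datetime import datetime

# Hypothetical release log: (deploy time, succeeded?, minutes to recover if not).
releases = [
    (datetime(2016, 1, 4), True, 0),
    (datetime(2016, 1, 6), False, 45),
    (datetime(2016, 1, 8), True, 0),
    (datetime(2016, 1, 11), False, 15),
]

def delivery_metrics(releases):
    """Compute the pipeline metrics DevOps teams tend to track."""
    n = len(releases)
    span_days = (releases[-1][0] - releases[0][0]).days or 1
    failures = [r for r in releases if not r[1]]
    return {
        "deploys_per_week": n * 7 / span_days,
        "failure_rate": len(failures) / n,
        "mttr_minutes": sum(r[2] for r in failures) / len(failures) if failures else 0.0,
    }

# delivery_metrics(releases)
# -> deploys_per_week 4.0, failure_rate 0.5, mttr_minutes 30.0
```

Tracking these few numbers over time is one concrete way a team can tell whether its DevOps adoption is actually shortening the pipeline.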

DevOps integration targets product delivery, quality testing, feature development, and maintenance releases in order to improve reliability and security and provide faster development and deployment cycles. Many of the ideas (and people) involved in DevOps come from the enterprise systems management and agile software development movements.

DevOps aids in software application release management for an organization by standardizing development environments. Events can be more easily tracked, and documented process-control and granular-reporting issues can be resolved more readily. Companies with release/deployment automation problems usually have existing automation but want to manage and drive it more flexibly, without entering everything manually at the command line.

Ideally, this automation can be invoked by non-operations employees in specific non-production environments. The DevOps approach grants developers more control of the environment and gives operations staff a more application-centric understanding of the infrastructure.
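A toy sketch of that idea: wrap the deployment steps in an idempotent, parameterized function that non-operations staff can call in approved environments, instead of hand-typing commands. The environment names and the state dictionary here are hypothetical stand-ins for a real deployment backend.

```python
# Minimal sketch: an idempotent, parameterized deploy entry point that
# non-operations staff can invoke in approved (non-production) environments.
# Environment names and the state dict are hypothetical.

ALLOWED_ENVS = {"dev", "qa", "staging"}   # non-production only

def ensure_deployed(env, app, version, current_state):
    """Bring `env` to the desired state; do nothing if it is already there."""
    if env not in ALLOWED_ENVS:
        raise PermissionError(f"{env} requires the operations team")
    if current_state.get((env, app)) == version:
        return "no-op"                     # idempotent: already at desired state
    current_state[(env, app)] = version    # stand-in for the real deploy steps
    return f"deployed {app} {version} to {env}"

state = {}
ensure_deployed("qa", "billing", "1.4.2", state)   # performs the deploy
ensure_deployed("qa", "billing", "1.4.2", state)   # returns "no-op"
```

Declaring the desired state and converging toward it, rather than scripting imperative steps, is the same idempotency principle that tools like Puppet and Chef are built around.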

The adoption of DevOps is being driven by factors such as:

● Use of agile and other development processes and methodologies
● Demand for an increased rate of production releases from application and business unit stakeholders
● Wide availability of virtualized and cloud infrastructure from internal and external providers
● Increased usage of data center automation and configuration management tools
● Increased focus on test automation and continuous integration methods

According to David Geer, 42 percent of IT pros surveyed had adopted or planned to adopt DevOps development approaches (InformationWeek 2014 DevOps Survey). That number ballooned to 66 percent of U.S. companies using DevOps approaches by the time of a Rackspace survey only 10 months later. With DevOps clearly taking over the coder’s realm, most programmers will eventually have to yield to and master this mindset.

What does DevOps mean for a programmer’s profession?
DevOps introduces developers to operational requirements and the tools and methods necessary to ensure that the code they create is immediately functional, of high quality, and fit for the production environment. With solid training in these tools and methods, developers should find their talents highly marketable in a career world that is increasingly favorable to DevOps practitioners.

Adam Gordon, CTO of New Horizons Computer Learning Centers of South Florida, says that important developer skills for DevOps environments include automating configuration management (infrastructure lifecycle management) using vendor-neutral tools such as Puppet, Chef, Ansible, SaltStack, and Docker. These tools integrate with a host of popular platforms and software including Amazon EC2, Amazon Web Services, CFEngine, Cisco, Eucalyptus, Google Cloud Platform, IBM Bluemix, Jelastic, Jenkins, Linux (various distributions), Microsoft Azure, OpenStack, OpenSVC, Rackspace, Rightscale, Salt, SoftLayer, Vagrant, VMware, and a rapidly expanding number of others.

Some of the most popular vendor-specific DevOps platforms include those from Microsoft and VMware, says Gordon. Microsoft’s DevOps-related products include System Center with its System Center Configuration Manager (SCCM) and System Center Operations Manager (SCOM). These Microsoft developer tools enable functions such as automated configuration management, monitoring, and custom management pack development. VMware tools such as vCloud Air (vCloud Hybrid Service) bridge the VMware development platform to tools such as Puppet and Chef, according to Gordon, while the vRealize cloud management platform automates infrastructure and application delivery, monitoring, analytics, and management.

Finally, Red Hat Linux developers will find that learning to deploy this distribution can be useful for work in Red Hat-related DevOps environments.

Does everyone love DevOps?
No, not everyone. Take Jeff Knupp, for instance. In an April 2014 blog post, Knupp claims that DevOps is “killing the developer.” Allow me to quote directly from Mr. Knupp’s post:

“There are two recent trends I really hate: DevOps and the notion of the ‘full-stack’ developer. The DevOps movement is so popular that I may as well say I hate the x86 architecture or monolithic kernels. But it’s true: I can’t stand it. The underlying cause of my pain? This fact: not every company is a start-up, though it appears that every company must act as though they were.

“DevOps is meant to denote a close collaboration and cross-pollination between what were previously purely development roles, purely operations roles, and purely QA roles. Because software needs to be released at an ever-increasing rate, the old ‘waterfall’ develop-test-release cycle is seen as broken. Developers must also take responsibility for the quality of the testing and release environments.

“The increasing scope of responsibility of the ‘developer’ (whether or not that term is even appropriate anymore is debatable) has given rise to a chimera-like job candidate: the ‘full-stack’ developer. Such a developer is capable of doing the job of developer, QA team member, operations analyst, sysadmin, and DBA. Before you accuse me of hyperbole, go back and read that list again. Is there any role in the list whose duties you wouldn’t expect a ‘full-stack’ developer to be well versed in?

“Where did these concepts come from? Start-ups, of course (and the Agile methodology). Start-ups are a peculiar beast and need to function in a very lean way to survive their first few years. I don’t deny this. Unfortunately, we’ve taken the multiple technical roles that engineers at start-ups were forced to play due to lack of resources into a set of minimum qualifications for the role of ‘developer.’ ”

“Imagine you’re at a start-up with a development team of seven. You’re one year into development of a web application that Xs all the Ys, and things are going well, though it’s always a frantic scramble to keep everything going. If there’s a particularly nasty issue that seems to require deep database knowledge, you don’t have the liberty of saying, ‘That’s not my specialty,’ and handing it off to a DBA team to investigate. Due to constrained resources, you’re forced to take on the role of DBA and fix the issue yourself.

“Now expand that scenario across all the roles listed earlier. At any one time, a developer at a start-up may be acting as a developer, QA tester, deployment/operations analyst, sysadmin, or DBA. That’s just the nature of the business, and some people thrive in that type of environment. Somewhere along the way, however, we tricked ourselves into thinking that because, at any one time, a start-up developer had to take on different roles, he or she should actually be all those things at once.

“If such people even exist, ‘full-stack’ developers still wouldn’t be used as they should. Rather than temporarily taking on a single role for a short period of time, then transitioning into the next role, they are meant to be performing all the roles, all the time. Most good developers can almost pull this off.”

Certifications in DevOps
The DevOps certification realm is taking root quickly. One organization that is out in front of the pack, however, is Amazon Web Services. If you want to make a strong move into DevOps, then consider any of the following credentials.

AWS Certified DevOps Engineer – Professional

The AWS Certified DevOps Engineer – Professional exam validates technical expertise in provisioning, operating, and managing distributed application systems on the AWS platform. Exam concepts you should understand for this exam include the ability to:

● Implement and manage continuous delivery systems and methodologies on AWS
● Understand, implement, and automate security controls, governance processes, and compliance validation
● Define and deploy monitoring, metrics, and logging systems on AWS
● Implement systems that are highly available, scalable, and self-healing on the AWS platform
● Design, manage, and maintain tools to automate operational processes

Required Prerequisite: status as AWS Certified Developer – Associate or AWS Certified SysOps Administrator – Associate

● Two or more years’ experience in provisioning, operating, and managing AWS environments
● Experience in developing code in at least one high-level programming language
● Experience in automation and testing via scripting/programming
● Understanding of agile and other development processes and methodologies

Multiple choice and multiple answer questions
170 minutes to complete the exam
Exam available in English
Exam registration fee is $300

AWS Certified SysOps Administrator – Associate

The AWS Certified SysOps Administrator – Associate exam validates technical expertise in deployment, management, and operations on the AWS platform. Exam concepts you should understand for this exam include:

● Deploying, managing, and operating scalable, highly available, and fault tolerant systems on AWS
● Migrating an existing on-premises application to AWS
● Implementing and controlling the flow of data to and from AWS
● Selecting the appropriate AWS service based on compute, data, or security requirements
● Identifying appropriate use of AWS operational best practices
● Estimating AWS usage costs and identifying operational cost control mechanisms


No prerequisites; recommend taking System Operations on AWS

● One or more years of hands-on experience in operating AWS-based applications
● Experience in provisioning, operating, and maintaining systems running on AWS
● Ability to identify and gather requirements to define a solution to be built and operated on AWS
● Capabilities to provide AWS operations and deployment guidance and best practices throughout the lifecycle of a project

Multiple choice and multiple answer questions
80 minutes to complete the exam
Available in English, Japanese, Simplified Chinese, and Brazilian Portuguese
Practice Exam Registration fee is $20
Exam Registration fee is $150

AWS Certified Developer – Associate

The AWS Certified Developer – Associate exam validates technical expertise in developing and maintaining applications on the AWS platform. Exam concepts you should understand for this exam include:

● Picking the right AWS services for the application
● Leveraging AWS SDKs to interact with AWS services from your application
● Writing code that optimizes performance of AWS services used by your application
● Code-level application security (IAM roles, credentials, encryption, etc.)

No prerequisites; recommend taking Developing on AWS

● One or more years of hands-on experience in designing and maintaining an AWS-based application
● In-depth knowledge of at least one high-level programming language
● Understanding of core AWS services, uses, and basic architecture best practices
● Proficiency in designing, developing, and deploying cloud-based solutions using AWS
● Experience with developing and maintaining applications written for Amazon Simple Storage Service, Amazon DynamoDB, Amazon Simple Queue Service, Amazon Simple Notification Service, Amazon Simple Workflow Service, AWS Elastic Beanstalk, and AWS Cloud Formation.

Multiple choice and multiple answer questions
80 minutes to complete the exam
Available in English, Simplified Chinese, and Japanese
Practice Exam Registration fee is $20
Exam Registration fee is $150

AWS Certified Solutions Architect – Professional

The AWS Certified Solutions Architect – Professional exam validates advanced technical skills and experience in designing distributed applications and systems on the AWS platform. Exam concepts you should understand for this exam include:

● Designing and deploying dynamically scalable, highly available, fault tolerant, and reliable applications on AWS
● Selecting appropriate AWS services to design and deploy an application based on given requirements
● Migrating complex, multi-tier applications on AWS
● Designing and deploying enterprise-wide scalable operations on AWS
● Implementing cost control strategies

Required Prerequisite: status as AWS Certified Solutions Architect – Associate
● Two or more years’ hands-on experience in designing and deploying cloud architecture on AWS
● Abilities to evaluate cloud application requirements and make architectural recommendations for implementation, deployment, and provisioning applications on AWS
● Capabilities to provide best practices guidance on the architectural design across multiple applications, projects, or the enterprise

Multiple choice and multiple answer questions
170 minutes to complete the exam
Exam available in English and Japanese
Practice Exam Registration fee is $40
Exam Registration fee is $300

AWS Certified Solutions Architect – Associate

Intended for individuals with experience in designing distributed applications and systems on the AWS platform. Exam concepts you should understand for this exam include:
● Designing and deploying scalable, highly available, and fault tolerant systems on AWS
● Lift and shift of an existing on-premises application to AWS
● Ingress and egress of data to and from AWS
● Selecting the appropriate AWS service based on data, compute, database, or security requirements
● Identifying appropriate use of AWS architectural best practices
● Estimating AWS costs and identifying cost control mechanisms

No prerequisites, but it is recommended that candidates take the Architecting on AWS course and the AWS Certification Exam Readiness Workshop

● One or more years of hands-on experience in designing available, cost efficient, fault tolerant, and scalable distributed systems on AWS
● In-depth knowledge of at least one high-level programming language
● Ability to identify and define requirements for an AWS-based application
● Experience with deploying hybrid systems with on-premises and AWS components
● Capability to provide best practices for building secure and reliable applications on the AWS platform

Multiple choice and multiple answer questions
80 minutes to complete the exam
Available in English, Japanese, Simplified Chinese, Korean, French, German, Spanish, and Brazilian Portuguese
Practice Exam Registration fee is $20
Exam Registration fee is $150




IT pros average 52-hour workweek

by admin ·

Employees in small IT departments tend to work more hours than those in large IT departments

It’s no surprise that a majority of IT pros work more than 40 hours per week, but it’s interesting to learn that some are putting in significantly longer workweeks, according to new survey data from Spiceworks.

Among 600 IT pros surveyed, 54% said they work more than 40 hours per week. At the high end of the overtime group, 18% of respondents said they work more than 60 hours per week, and 17% said they top 50 hours per week. The average workweek among all respondents is 52 hours, Spiceworks reports.

The data comes at a time when hiring managers say it’s tough to hire experienced talent and IT pros say they’re more willing to switch jobs for a better offer. Companies claim to be boosting pay and increasing benefits and perks to entice employees – yet technical talent averages 10+ hours per day, according to the Spiceworks data.

When it surveyed respondents about IT staffing practices, Spiceworks hoped to find a consensus about the ideal IT staff-to-user ratio that would enable adequate incident response times without overworking IT staff. The company – which offers free management software and hosts a community for IT pros – didn’t come up with any universal formula, but it did share information about staffing trends across multiple industries and different sized companies. Here are a few of the survey findings.

Industry plays a big role in IT workload
IT pros who work in government and education are less likely to work extra hours than those in other industries. In education and government, only 33% and 37% of staff, respectively, work more than a 40-hour week.

In the construction/engineering and manufacturing industries, long workweeks are the norm. Construction/engineering is at the high end of the scale, with 72% of staff working more than a 40-hour week; in manufacturing, the figure is 60%.

Large IT departments share workloads more effectively
Spiceworks found a correlation between the size of IT departments and the number of hours worked. Organizations with 40-hours-or-less workweeks tend to have larger IT departments (an average of 17 employees). Conversely, smaller IT departments tend to require more than 40 hours per week. The average overworked IT department has 10 or fewer staff members.

Helpdesk size, in particular, shapes the workload
Solving end users’ problems is one reason IT staff is overworked, Spiceworks concludes. Its survey found that IT pros in departments with more dedicated helpdesk technicians work fewer hours on average, while IT pros in departments with fewer helpdesk technicians tend to work more than 40 hours per week. Specifically, organizations with 40-hours-or-less workweeks have an average of 9 helpdesk technicians; organizations with more than 40-hour workweeks have an average of 3 helpdesk technicians.




Former Marine fights to connect veterans with IT jobs

by admin ·

One consulting firm’s hiring program aims to place U.S. military veterans in IT engagements.
The transition to corporate life can be challenging for military veterans. Companies aren’t used to hiring veterans, whose resumes are unlikely to make it past their keyword-filtering software. Veterans aren’t used to articulating their military experience in business terms, nor are they accustomed to typical workplace culture and communication. Far too often, uniquely skilled veterans returning from Iraq and Afghanistan hear the same disheartening message — that they’d make great security guards.

Nick Swaggert, a former infantry officer with the U.S. Marine Corps, sees untapped talent in these returning soldiers, and he’s committed to helping them find career opportunities in the tech world. Swaggert is Veterans Program Director at Genesis10, an outsourcing firm that provides IT consulting and talent management services. His job is to recruit veterans, help them translate their military experience to relevant corporate experience, and find a place for veterans to work at Genesis10’s clients.

Swaggert knows firsthand what it’s like to see a military career reduced to the output of a military skills translator (software that’s designed to match military skills, experience and training to civilian career opportunities).

“I was in the Marine Corps infantry. Backpack and guns type of thing. So what does it say for me? I can be a security guard,” Swaggert says of the typical automated skills translator. “Someone in the infantry probably pulled a trigger less than 0.1% of the time. They probably spent a lot of their time in logistics, leadership, setting up communications assets, organizing supply chains. These are all things we did, but my job says I pulled a trigger.”

In reality, the infantry experience varies widely for today’s service men and women – including Swaggert, who was sent to the Syrian border, 300 miles from the nearest base. “I needed to make sure that the supply chain — helicopters were flying us supplies — was optimized. When you live in a space the size of a conference room table, or you’re on a vehicle, there’s not a lot of room for error in terms of too much or too little supplies,” he recalls. “I needed to learn how to set up a satellite radio, to send digital pictures of smugglers we were catching back to the base. Using a very high-tech radio and a rugged laptop in a sandstorm, I learned to problem-solve communications assets. That doesn’t come across in a translator.”

When Swaggert left the Marine Corps, he found a new mission: helping veterans find civilian jobs that make use of their myriad talents.

“I got out in 2010. I was told time and time again, ‘Nick, you seem like a really great guy, but you just don’t have the experience that we’re looking for.’ That’s what led me to go and get my master’s degree and become passionate about it. This is a huge opportunity. There’s a huge miss here in communication. Someone needs to be out there, proselytizing.”
Genesis of an idea

Swaggert also understands what it’s like to be an enlisted person and an officer — a rare perspective for veterans of the typically stratified U.S. military. He enlisted in the Marines right out of high school. He was later selected for an officer training program, which allowed him to get a college degree while in the Marines.

After getting his degree, Swaggert was commissioned as an officer in 2005. He wanted to be an infantry officer, even though a friend advised him to pursue a more hirable assignment in communications or logistics. “I said ‘no way, that’s not going to happen. I’m going to go serve my country on the front lines.’ Then I came home, and like many other people, saw that doesn’t help me.”

Even with a college degree, his path to a corporate career wasn’t always smooth.
Swaggert applied to, and was rejected by, a corporate program that’s designed to train and certify military veterans in computer networking. “My ASVAB — Armed Services Vocational Aptitude Battery — it’s like the military SAT. It shows how well you can learn new jobs. I scored in the 96th percentile of all service members. They don’t look at that, though. They just say, ‘well, he was in the infantry, he can shoot guns. There’s no way he could possibly learn network stuff.’ This is exactly why people can’t get jobs.”

When young, college-educated officers leave the military, they’re often recruited through junior military officer (JMO) training programs at companies such as Deloitte, PwC, General Electric and PepsiCo. Companies compete to hire these service members, many of whom got their college degrees, served four years in the military, and are set to enter the business world at a young age having amassed significant leadership experience. “They have their degrees, the path is laid out for them, and they’re heavily recruited,” Swaggert says.

It’s a different world for enlisted men and women, most of whom leave the military without a college degree. Even if they get their degrees after serving in the military, it can be hard to find work. “An officer goes to college for four years, then serves for four years. An enlisted guy serves four years, then goes to college for four years. After eight years they’re fairly equivalent, but one group is highly employed and the other group is heavily underemployed,” Swaggert says.

Nationwide, the unemployment rate for military veterans who served after 9/11 was 9% in 2013, according to data from the U.S. Bureau of Labor Statistics. That’s down from 9.9% the year before, but well above the overall unemployment rate for civilians, which was 7.2% during the same period. The numbers are particularly bleak for the youngest veterans, aged 18-24, who posted a jobless rate of 21.4%.

Nick Swaggert (center), pictured with the crew of his command and control vehicle during a break while patrolling the Syrian/Iraqi border.

“Being an officer, you gain a tremendous amount of experience and have tremendous leadership opportunities. The other group has been given similar, but not as extensive, experience. That’s where we think there’s a business opportunity,” Swaggert says.

At Genesis10, employees see the value of U.S. military experience in the corporate world. It’s a view that comes from the top. Harley Lippman is the CEO and owner of the $185 million privately-held firm, which is based in New York. Lippman participated in a program that brings groups of U.S. service-disabled veterans to Israel, and when he saw how well Israel treats its veterans – with comprehensive health services and job assistance, for example — Lippman was inspired to launch his company’s program on Veterans Day in 2011. Swaggert joined the effort in mid-2013. “Harley is a visionary, and he saw that there’s a huge opportunity to tap into this untapped talent vein,” Swaggert says.

The firm is realistic about placing former soldiers. Some of the roles Genesis10 envisions U.S. military veterans helping fill include project manager, business analyst, testing analyst, storage administrator, database administrator, network engineer, midrange server specialist, and problem and incident management positions.

“We have clients who need Java developers with 10 years of experience. I’m not pretending Joe Smith off the street is going to do that,” Swaggert says. “But there are needs such as entry-level data entry, business analyst, quality assurance — stuff veterans will do really well, very process-oriented roles. Veterans are very detail-oriented. We have checklists for everything we do. If you don’t dot an ‘i’ or cross a ‘t’ an artillery round lands on your location.”

Part of Genesis10’s strategy is to connect veterans with companies that want to hire returning soldiers but are unsure how to go about it.

One hurdle is that many companies don’t know how to find veterans. It’s not enough to post typical job descriptions on veteran-focused job boards or at military recruiting fairs. “That doesn’t mean anything to a veteran. You’re not recruiting by job code — everyone in the military has a job code. You’re not recruiting by rank — rank equals experience,” Swaggert says. “You have to tailor that.”

He’s understanding of the conundrum for hiring managers. “On the company side, I don’t blame them,” Swaggert says. “Hiring managers don’t have experience hiring veterans. We are such a small fraction of the population. You can’t expect them to know and understand.”

Another part of Genesis10’s strategy is to prepare veterans for workplace culture, not only by tweaking resumes but also through interview coaching and soft-skills development. Communication is a key element.

“Veterans have different communications styles. In the military, we call it BLUF — it’s an acronym that stands for ‘bottom line up front.’ You state the bottom line. In the military, you walk up to someone at their desk, or wherever, and you just tell them what you want,” Swaggert says. Civilians communicate differently, and veterans need to learn to deal with the differences.

Veterans also need to learn how to interview. In the military, higher-ups look at soldiers’ service records to determine who moves up the ranks. “That interviewing skill just completely atrophies — if it was ever there in the first place and most likely it wasn’t,” Swaggert says.

For companies that are open to hiring veterans, Genesis10 can smooth the process. The company understands that there’s risk associated with trying new hiring approaches. “We’ve built a program to try to mitigate that risk,” Swaggert says. “We flat out say in our presentation, ‘we are here to mitigate the risk of hiring a veteran.'”

Still, it’s not always an easy sell. “There’s a reason why veterans don’t get hired. If it were easy it would already have been done. You have to invest time and effort. I wish I could say it’s just rewriting a resume. But it’s not.”

The most challenging part of Swaggert’s job is trying to find companies that are willing to hire veterans.

“My number one job is not to find veterans. I could stroll down to the nearest base, or post a job online looking for U.S. Military veterans. The hard part is walking into the companies. I’ve talked to a lot of CIOs, a lot of VPs, saying, ‘do you guys want to hire veterans?’ They all say yes, and they say, ‘well how do we do it?’ We talk about selection, training, mentoring, and onboarding and getting them to commit to that kind of investment.”

Success is hearing “’yes, I’m going to force my people to hire someone who’s a little bit different.’”

Swaggert joined the Reserves to stay connected to the military, and as a commanding officer in the Reserves, he flies monthly to Ohio. “The Marine Corps is very important to me. It will always be very important to me,” Swaggert says. “I’m not wearing a uniform every day, but I’m definitely doing military-related things daily.”

“There are plenty of people like me, who joined the military during a time of war, who are really smart people who said, ‘I want to serve on the front lines, because that’s what this country needs.'”

Now that they’re home, he wants to help them find work.




MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft 98-361 Training at



The most expensive PCs in computing history

by admin ·

Raspberry Pi-like specs at recreational vehicle prices — as computed in today’s dollars

The most expensive PCs in computing history
We all like to complain about our computers and devices. It’s our inalienable right as 21st-century digital age consumers. In fact, a case can be made that it’s become something of a national pastime.

But the truth is, in the year 2015, we have access to unprecedented computing power for our spending dollar. Click around online retail sites or visit your local big box store, and you’ll find startling numbers in those specification charts — whether you’re looking at desktop systems, laptops, mobile devices, or the emerging spaces in between. Processors measured in gigahertz. Hard drives measured in terabytes. Display technologies straight out of science fiction.

How good do we have it, relatively speaking? One way to crunch the numbers is to turn back the clock. Here we take a look at some of the more expensive systems ever put to market in the personal computing era, along with their technical specs and pricing at the time. To make things manageable, we limited our archaeological dig to pre-1999 desktop and portable computers marketed to individual users, with some detours into hybrid variations and multiuser systems. Hold on to your wallets, it’s about to get weird.
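The "today's dollars" figures throughout this list come from simple consumer price index scaling. Here is a rough sketch of that arithmetic; the CPI levels below are rounded, illustrative figures, not official statistics:

```python
# Convert a historical list price to approximate 2015 dollars by scaling
# with the ratio of consumer price index levels. The CPI values here are
# rounded, illustrative figures, not official data.
CPI = {
    1965: 31.5, 1975: 53.8, 1979: 72.6, 1983: 99.6, 1985: 107.6,
    1989: 124.0, 1994: 148.2, 1999: 166.6, 2015: 237.0,
}

def in_todays_dollars(price, year, today=2015):
    """Scale `price` from `year` dollars into `today` dollars."""
    return price * CPI[today] / CPI[year]

# IBM 5100, $19,975 in 1975: roughly $88,000 in 2015 dollars.
print(round(in_todays_dollars(19_975, 1975), -3))
```

The same one-line ratio produces every adjusted price that follows, give or take rounding and which CPI series you pick.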

The Programma 101 (1965)
Surely one of the coolest machines in the history of computing simply from a naming point of view, the Programma 101 was an Italian device that many consider the very first desktop computer. In an era when computers were the size of Buicks, the Programma looked like an Art Deco typewriter and made its debut at the 1964 New York World’s Fair.

The Programma was a kind of supercalculator — it could add, subtract, multiply, and divide huge numbers. But because it could also load and record programming sequences on magnetic cards, most historians consider it a genuine desktop PC. NASA purchased several of the machines to plan the Apollo 11 moon landing. Each device cost about $3,500 ($24,000 today), making it easily the most expensive PC of its time — considering it was the only PC of its time.

IBM Portable Computer (1975)
About the size of a small suitcase and weighing in at 55 pounds, the IBM 5100 Portable Computer was marketed at the time as the world’s first mini-computer. Indeed, it was one of the first (relatively) portable computers and was aimed primarily at scientists — well, scientists with plenty of university grant money. The top-end 64KB model had a list price of $19,975. That’s around $88,000 today, adjusted for inflation.

For your investment, you got a state-of-the-art, self-contained machine. The 5100 boasted an integrated 5-inch CRT display and magnetic tape drive. The display could output 16 lines of text, with 64 characters each. The quarter-inch cartridge tape drive could store 204KB. Absent a true CPU as we know them today, the 5100 used a circuit board processor called PALM, for Put All Logic in Microcode, which included a 16-bit data bus.

Cromemco System Three (1979)
Founded by two Stanford doctoral students, Cromemco was a California computer company named after the Stanford dormitory reserved for engineering Ph.D. students (Crothers Memorial Hall). In the late 1970s and early 1980s, the company made several key innovations in the area of computer peripherals, including technology for cameras, joysticks, and graphic cards.

In 1979, the company released its System Three multiuser computer, designed to accommodate between one and six terminals and a printer attached to the heavy central chassis. It was a nice option for certain buyers — NASA and the U.S. Air Force were early adopters — with a top-end configuration that boasted 512KB of RAM and a 5MB external hard drive. System Three was capable of running both Fortran IV and Z80 Basic, and the company’s Cromix was the first Unix-like OS available for microcomputer systems. List price: $12,495 in 1979; around $36,000 today.

The Apple Lisa (1983)
In January 1983, a little company called Apple put a profoundly curious specimen on the market called the Lisa. The first personal computer with a mouse and GUI successfully marketed to mainstream buyers, it was a giant step for user-friendliness. (Yes, the Xerox Alto came 10 years earlier, but was never sold to the public.)

For a machine aimed at Joe Computer User, the ticket price on the Lisa was an alarming $9,995 — the equivalent of almost $24,000 today. For your money, you got a 5MHz Motorola CPU, 1MB of RAM, and a 12-inch monochrome display. The dual built-in floppy drives held 871KB each but were notoriously unreliable; an external 5MB hard drive was offered as an option. FYI, the Lisa was indeed named after Steve Jobs’ daughter, although later marketing efforts came up with the backronym Local Integrated System Architecture.

Osborne Vixen (1985)
One of the first “luggable” computers, the Osborne Vixen split the difference between desktop and portable with a unique design. The attached keyboard folded down and out of the front casing, lifting up the front of the system — the better to view that dazzling 7-inch CRT display.

After some corporate drama and delays, the Vixen was released in 1985 with a 4MHz processor, 64KB of RAM, and dual disk drives. It also came bundled with a generous suite of software, including programs for word processing, spreadsheets, business graphics, and even a side-scrolling adventure game. The Vixen is a good example of how even generous midrange systems could set you back in the 1980s. Add in the optional 10MB external hard drive, and the Vixen cost $2,800 — around $6,200 now.

Apple Macintosh Portable (1989)
Apple’s first portable Macintosh was designed to be a fast and powerful alternative to the laptop designs available in the late 1980s. And it was, at the time. But it’s interesting to crunch the numbers versus Apple’s laptop offerings 25 years later.

The Macintosh Portable was built around a 16MHz Motorola CPU, with 1MB of RAM (expandable to 9MB). The two-pound lead acid battery — a miniature automotive battery, essentially — provided around six hours of power with typical usage. The hard drive held 40MB of data, and the display provided 640-by-400 resolution, monochrome. The optional onboard modem: 9,600 baud. (Younger readers will want to Google that.) Consider those numbers and reflect that the Macintosh Portable sold for $6,500 in 1989 — around $12,500 today.

Risc PC (1994)
U.K. computer company Acorn — occasionally referred to as the “British Apple” — made a series of popular systems across the pond in the 1980s. In 1994, the company released its next-generation Risc PC system. In addition to an innovative case design that allowed for easy and extensive expansion, the Risc PC featured a second CPU slot for running IBM PC-compatible software alongside Acorn software running on the Risc OS.

Spec-wise, you got dual processors, a 420MB hard drive, and a 17-inch monitor. The numbers get a little tricky, but U.K. list price for a fully loaded RISC PC 600 in 1994 would convert to around $3,000 in U.S. money. Further adjusted to today’s prices, that’s about $5,000.

Dell Dimension XPS T600 (1999)
The market for personal computers crested in the late 1990s, and soon nearly everyone had a computer at home, at work, or both. Outlier instances of crazy expensive computers gradually faded away, with a few exceptions, as the market matured and prices stabilized. Various price points emerged for different kinds of computing needs. But still — some computers were more expensive than others.

For our last twist of the time-travel dial, let’s go back a mere 15 years, to those heady days shortly before the turn of the millennium. Thanks to the Internet Archive’s Wayback Machine, we can see PCWorld magazine’s Best Buy pick for Power PCs in December 1999. The Dell Dimension XPS T600 topped the charts that month, with its Pentium III-600 CPU, 128MB of RAM, 20GB hard drive, and 17-inch CRT display. Average retail price? $2,300, or about $3,400 today.





Five signs an employee plans to leave with your company’s data

by admin ·

A global high-tech manufacturer had reached its boiling point after several of its sales reps left the company unexpectedly and took with them sales leads and other data to their new employers.

The company needed to stop the thefts before they happened. So the company hired several security analysts who manually looked at the behavior patterns for all sales reps working on its cloud-based CRM system, and then matched them with the behaviors of those who ultimately quit their jobs. What they were able to correlate was startling.

Sales reps who had shown a spike in abnormal system activity between weeks nine and 12 of a financial quarter generally quit at the end of week 13 – in many cases because they knew they weren’t going to meet their sales quotas, says Rohit Gupta, president of cloud security automation firm Palerra, which now works with the manufacturer.

These abnormal behaviors included one or more of the following warning signs: mass exports of lead information, entering parts of the system where they didn’t usually go, changing object information, deleting items, and doing any of these things from home or in the office on a Saturday afternoon.
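The correlation work the analysts did by hand can be sketched as a simple per-rep baseline comparison. The sample data, the eight-week baseline, and the five-times multiplier here are illustrative assumptions, not Palerra's actual model:

```python
from statistics import mean

def spike_weeks(weekly_exports, baseline_weeks=8, multiplier=5.0):
    """Flag weeks after an early-quarter baseline whose export volume
    exceeds `multiplier` times that rep's own baseline average."""
    baseline = mean(weekly_exports[:baseline_weeks])
    return [i + 1 for i, n in enumerate(weekly_exports)
            if i >= baseline_weeks and n > multiplier * baseline]

# Weeks 1-13 of a quarter for one rep; the abnormal spike lands in weeks 10-11,
# squarely inside the week-nine-to-12 danger window described above.
counts = [12, 9, 11, 10, 13, 8, 12, 11, 10, 240, 180, 14, 9]
print(spike_weeks(counts))  # -> [10, 11]
```

Comparing each rep against their own history, rather than a global average, is what lets a rule this crude work: a 240-record week is unremarkable for a power user but a glaring outlier for someone who normally exports a dozen.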

With these early warning indicators, IT staff was able to put controls in place to stop massive downloads before they happened or freeze accounts for several hours until a manager had a chance to speak with the employee.

Today, cloud security automation tools make easier work of detecting these warning signs. “Predictive analytics is important, not just prevention or detection, but getting ahead of the curve,” says Gupta. Palerra’s LORIC is one of a handful of cloud security automation tools that have ventured into predictive analytics capabilities for the cloud on top of security configuration management, threat detection and automated incident response — and it comes at a critical time.

A thriving economy means greater opportunity for job seekers, and therefore more job turnover. In May 2015, the US Bureau of Labor Statistics reported 4.7 million total employee separations, 2.7 million of which were “quits,” or voluntary separations initiated by the employee. But lately, it’s become easier for those employees to leave the company with more than just their 401K plan and a box of pens.

Employees are taking valuable company data with them that is stored in the cloud in CRM systems like Salesforce, collaboration tools such as Microsoft Office 365 or storage sites like Box and Dropbox.

“It’s just so easy to access, download and transfer data these days – in fact, the company doesn’t even know it’s happening,” says Eric Chiu, president of cloud security automation firm HyTrust. “On the flipside, it’s difficult to track” all the data that is out there and secure data against an authenticated user, he adds.

Half of all employees who left their posts in 2013 took company data with them, and 40 percent planned to use that data in their new job, according to a study by Symantec and the Ponemon Institute.

In January, Morgan Stanley fired one of its financial advisers after it accused him of stealing account data on about 350,000 clients, potentially one of the largest data thefts at a wealth management firm.

Predictive capabilities are available from just a handful of cloud security automation vendors today, and some analysts consider predictive analytics to be in the early stages.

“There’s potential but the practical applications are still a little immature,” says Jon Oltsik, senior principal analyst at Enterprise Strategy Group. “You can tune something to look for an attack that you know about, but what’s hard is to tune it to something you don’t know about. I can look at access patterns on repositories and how much people download and whether they save documents locally. But there’s always creative ways to work around that. A really dedicated, sophisticated adversary will quickly decipher where you’re not looking – and that’s the problem.” Or they will carry out a “low-and-slow” theft by regularly moving data to a repository over time, he adds.

Still, security automation vendors continue to add predictive analytics capabilities to their platforms. In July, Splunk acquired security company Caspida to add machine learning-based user behavioral analytics and extend its analytics-enabled SIEM to better detect advanced and insider threats. The Splunk platform can search, monitor, analyze and visualize machine-generated big data coming from websites, applications, servers, networks, sensors and mobile devices.

Some users of cloud-based systems may choose to wait for predictive analytics to mature before taking the plunge. In the meantime, there are other ways to keep data from walking out the door with exiting employees, experts say.

Work with human resources
It’s important for IT security managers to communicate with the human resources department so they are aware of pending layoffs or other personnel issues that might lead to employee departures. “You have to look at whatever data is available in their corporate environment, such as an HR data source. If an employee has a termination date or is being terminated for any reason, then you have to look at that person’s system activities with increased scrutiny,” says Andras Cser, vice president and principal analyst at Forrester Research, serving security and risk professionals.

Monitor third-party storage
Many companies have measures in place that will automatically stop unauthorized use of internal systems or keep users from downloading data, but what about cloud storage sites that are out of their direct control?

“You can have solutions like CloudLock, BetterCloud and others that tie to APIs of a cloud service like Dropbox, Box or Salesforce,” Cser says. “If the solution sees that I’m downloading 300-times the usual data volume that I normally look at, then it can send an alert.”
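That volume-ratio check can be sketched as a rolling per-user baseline. The 300x ratio mirrors Cser's example; the window size, class name, and method names are illustrative, and the hookup to a real cloud storage API is assumed rather than shown:

```python
from collections import deque

class DownloadMonitor:
    """Keep a rolling per-user baseline of daily download volume and
    flag any day that dwarfs it by a configurable ratio."""
    def __init__(self, window=30, ratio=300):
        self.history = deque(maxlen=window)  # recent daily byte counts
        self.ratio = ratio

    def observe(self, bytes_today):
        """Record today's volume; return True if it should raise an alert."""
        alert = bool(self.history) and \
            bytes_today > self.ratio * (sum(self.history) / len(self.history))
        self.history.append(bytes_today)
        return alert

m = DownloadMonitor()
for _ in range(10):
    m.observe(5_000_000)          # typical days: about 5 MB each
print(m.observe(2_000_000_000))   # one 2 GB day, 400x the baseline -> True
```

In practice the daily byte counts would come from the storage service's audit or events API, and the alert would feed a SIEM rather than a print statement.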

“Encrypt [sensitive] data so that if it’s taken offsite, then it is no longer useful. Controls, monitoring and data security on the inside can prevent bad things from happening,” Chiu says.

Use automation
Cloud apps are typically siloed and not connected in the network, so it’s difficult to put controls in place across the board. “The result is – if there are separate owners responsible for managing Workday, Google Apps or Box, for instance, then those administrators have to do the right thing” and put the right monitoring and controls in place, Gupta says. “That’s all the more reason for cloud security automation. If you have a monitoring framework doing this 24/7 in an automated fashion, then the enterprise has someone to watch their back.”





Office 2016 adopts branches, update-or-else strategy of Windows 10

by admin ·

Enterprise subscribers to Office 365 get “Current Branch” and “Current Branch for Business” update and upgrade tracks

Microsoft yesterday said it will launch Office 2016 for Windows on Sept. 22, and detailed how it will deliver updates and upgrades with a cadence and rules set similar to Windows 10’s.

Office 2016 will be “broadly available” starting Sept. 22, said Julie White, general manager of Office 365 technical product management, in a Thursday post on the team’s blog. Organizations with volume license agreements, including those with Software Assurance, will be able to download the new bits beginning Oct. 1.

Week after next, subscribers to Office 365 Home and Personal — the consumer-grade “rent-not-own” plans that cost $70 and $100 yearly — may manually trigger the Office 2016 for Windows download. In October, Office 2016 will automatically download to those subscribers’ devices. The applications will be updated monthly after that, with vulnerability patches, non-security bug fixes and new features and functionality.

Consumers are locked into that monthly tempo, and like those running Windows 10 Home, must take the updates as they automatically arrive.

But for Office 2016 in businesses, Microsoft plans to reuse the update-and-upgrade release pace pioneered by Windows 10. Office 365 will offer both a “Current Branch” and a “Current Branch for Business,” just as does Windows 10.

Current Branch (CB) will update monthly and potentially include new or improved features, security patches and non-security bug fixes. Current Branch for Business (CBB), on the other hand, will issue updates every four months, with the same potential content. In the months that Microsoft does not deliver a CBB update, it will issue only security fixes to customers who adopt the branch.

Failure to deploy the next CB update means customers won’t receive future security updates. For CBB, businesses may defer deployment of the next update — four months later — but must adopt the one after that, or face a patch stoppage.

Office 365 CBB users, in other words, can retain the feature set of Office 2016 no longer than eight months (two updates). If CBB 1 appears, as Microsoft has pledged, in February 2016, then customers may skip the June 2016 CBB 2 but must deploy October 2016’s CBB 3 or be severed from security updates.
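Those eligibility rules reduce to a small decision function. This is a sketch of the policy as described here, not an actual Microsoft API; the function name and inputs are illustrative:

```python
def receives_security_updates(branch, builds_behind):
    """Per the rules above: CB subscribers must deploy every monthly
    update; CBB subscribers may defer one four-month update but must
    deploy the next, or security updates stop."""
    if branch == "CB":
        return builds_behind == 0
    if branch == "CBB":
        return builds_behind <= 1
    raise ValueError("unknown branch: " + branch)

# A CBB shop still on CBB 1 when CBB 2 ships is fine;
# once CBB 3 ships, that shop is cut off.
print(receives_security_updates("CBB", 1))  # -> True
print(receives_security_updates("CBB", 2))  # -> False
```

The "at most one build behind" clause is what caps a CBB feature set at roughly eight months.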

Those rules and the CBB tempo are also identical to Windows 10’s, although not necessarily on the same calendar schedule.

Some Office 365 customers will be able to use only the CB: Those include organizations that have subscribed to Office 365 Business and Office 365 Business Professional, plans that currently cost $8.25 and $12.50 per user per month.

Firms that subscribe to the pricier Office 365 ProPlus, Office 365 Enterprise E3 or Office 365 Enterprise E4 plans may opt for the CBB track. Those plans run from $12 to $22 per user per month.

That, too, is identical to Windows 10, in that the operating system offers leisurely update cadences only to those running the more expensive Windows 10 Pro and Windows 10 Enterprise.

There will be no analog to Windows 10’s “Long-term Servicing Branch,” or LTSB, the track that eschews all but security patches for extremely long stretches.

Microsoft may not have spelled it out, but the existence of CB and CBB tracks also plays to its new strategy of passing testing responsibilities to customers, another characteristic of Windows 10. Those running the CB will, in effect, serve as guinea pigs as changes roll out to them monthly; their feedback and complaints will be used by Microsoft to tweak or fix problems before the code reaches customers running the CBB.

Although Microsoft has burdened Office 365 and the locally-installed Office 2016 apps that compose the core of a subscription with a slew of new terms and rules, the changes are in some ways more clarification than procedural, argued Wes Miller, an analyst at Directions on Microsoft.

“Before, we didn’t know when these [Office 365] updates were coming,” said Miller. “Now, they’re giving us the classifications of what updates will come when.”

The similarities of the Windows 10 and Office 365 release rhythms; the lexicon, including CB and CBB; and the patch stick brandished to motivate customers to update, are all intentional, Miller added. “Microsoft’s giving relatively similar nomenclature for its two major desktop endpoints, Windows and Office,” he said.

But Miller contrasted how Office 365 — which currently is based on the Office 2013 application suite — is managed by organizations with the methods outlined for Office 2016 within the subscription plans.

Now, once a business adopts Office 365, it points workers to the Office 2013 downloads. They install the applications locally on their devices, and from that point, Microsoft, not the organization, “owns” the maintenance via updates.

“If an IT team wanted to own Office maintenance, it had to download the transformation tools [the Office Customization Tool, or OCT], take the installer from Microsoft and modify it,” said Miller. The IT-derived installer would then be offered to employees. “From that point, the organization owns the updating,” Miller continued. He called the process “a little burdensome” — an oft-heard complaint from business subscribers and their supporting IT staffs.

Under Office 2016, shops that subscribe to Office 365 will be able to more easily “own” the updating process by selecting the appropriate branch for each employee or groups of employees. While IT will still rely on the OCT to craft custom installers, the revised tool — not yet available — will support branch selection, Microsoft said in a support document.

The multiple update tracks Microsoft has outlined will only apply to Office 2016 within an Office 365 subscription, Miller said. Traditional licenses, dubbed “perpetual” in that once paid for they can be used for as long as desired, will not be able to adopt the CB or CBB. That’s in keeping with Microsoft’s long-running scheme to make Office 365 more attractive than perpetual licenses, whether purchased by consumers one at a time or by businesses in bulk, by virtue of its accelerated release schedule.

Office 2016’s debut later this month will also start a clock on Office 2013 for Office 365 enterprise subscribers.

“You can continue to use and receive security updates for the Office 2013 version of Office 365 ProPlus for the twelve months after the release of Office 2016,” Microsoft told users. “After 12 months, no additional security updates will be made available for the Office 2013 version. Therefore, we strongly recommend that you update to the Office 2016 version within the first twelve months that it’s available.”

The first CB of Office 2016 will be released Sept. 22, and the first CBB update will appear some time in February 2016. Microsoft has not yet set the price of individual perpetual licenses sold at retail, or even said whether those would go on sale this month: The company did not reply to questions about retail availability.
