Daily Tech Digest - October 12, 2018


The first step in reducing TCO is understanding what it is and why current solutions are driving it so high. A data protection TCO analysis should do what its name implies: calculate the total cost of ownership. For data protection, this means adding up all the hard costs, like data protection storage, the data protection network and data protection software. It should also include periodic costs like hardware and software maintenance (including support), as well as subscription costs like cloud storage or cloud compute. Calculating data protection infrastructure TCO also means adding up the operating costs associated with learning and operating the data protection system. Most data protection solutions are not self-service or designed for IT generalists; they need a well-trained administrator familiar with the infrastructure to interact with them. Operating costs are particularly important because certain complicated data protection tasks – like a full restore – will require a knowledgeable person to complete.
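As a rough illustration of such an analysis, the sketch below rolls hard, periodic, subscription and operating costs into a single figure over a planning horizon. Every category and number here is a hypothetical placeholder, not data from the article:

```python
# Hypothetical data-protection TCO roll-up over a planning horizon.
YEARS = 3

hard_costs = {            # one-time capital costs
    "backup_storage": 120_000,
    "backup_network": 30_000,
    "backup_software": 45_000,
}
periodic_costs = {        # recurring annual costs
    "hw_sw_maintenance": 25_000,   # includes support contracts
    "cloud_storage": 18_000,       # subscription
    "cloud_compute": 9_000,        # subscription
}
operating_costs = {       # people costs: training and day-to-day operation
    "admin_salary_share": 60_000,  # share of an administrator's time per year
    "training": 5_000,
}

tco = (sum(hard_costs.values())
       + YEARS * (sum(periodic_costs.values()) + sum(operating_costs.values())))
print(f"{YEARS}-year data-protection TCO: ${tco:,}")
```

The point of the roll-up is that the recurring people costs often dwarf the one-time hardware line items once multiplied over the planning horizon.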



Taking Agile Transformations Beyond the Tipping Point

Not all leaders can make this transition. For example, one Asia-Pacific company undergoing an agile transformation replaced one-quarter of its top 40 leaders with individuals who better embodied agile values, such as collaboration and teamwork. Middle managers will also face challenges. Those who have grown up inside silos will need to learn how to manage cross-functional teams and delegate decision making to employees closer to the field. They may even need to return to doing the daily work rather than only managing other people. The coordination activities that consumed so much of managers’ time are increasingly handled within and between teams. While agile may be a fundamentally different way of working, many of the steps to become an agile organization are familiar to any executive who has gone through a successful corporate transformation. (See Exhibit 2.) The steps of committing, designing, preparing, and refining are variations of any large-scale change.


Detail of Dutch reaction to Russian cyber attack made public deliberately


The attackers used a rental car parked close to the OPCW building in The Hague. The hackers then attempted to use Pineapples to break into the WiFi network of the organisation. Pineapples are devices usually used for intercepting network traffic. The hackers were also caught using antennas and signal amplifiers, and other equipment the MIVD considers “specifically used during hacking operations”. During the operation, the MIVD found laptops with extra batteries (which the MIVD said were purchased in the Netherlands), and mobile phones with 4G connectivity, which the hackers tried to destroy during their arrest. Eichelsheim reiterated that the excuse that the Russians might simply have been on holiday won’t fly. “They were caught with very specific equipment, entered on diplomatic visas, and were found carrying €20,000 and $20,000 in cash. That’s not a holiday.”


A Day In The Life Of Ms. Smith: How IoT And IIoT Enhance Our Lives

Ms. Smith walks out of the building. An RFID reader at the door scans her badge as she walks past it. Computer vision sees her approaching the exit and walking into the parking lot. The drive home is much like her drive to work. Computer vision devices on the road monitor and control traffic signals. Her ride home is slow—but again, she misses most of the red lights. Fifteen minutes before she gets home, the thermostat automatically turns on the heat (or cooling) so that the temperature is comfortable when she comes in the door. Finally at home, she walks inside, and the lights turn on. To relax, she turns on the TV, and the lights in the room automatically dim, making it easier for her to watch her favorite show. As she’s ready for bed, she says, “Turn down the lights,” to her digital assistant. “Oh, and wake me up at 5:30,” she says. “No, make it 6.” Lights in the other parts of her house dim, the lights in her bedroom slowly fade, and so does Ms. Smith.


5 CRM trends for 2018

Applying machine learning to CRM data has been a difficult process for most organizations. To do this traditionally you would need machine learning expertise on staff, developers and the drive to build the solution. Alternatively, you would have to build and maintain integration between your CRM system and an external machine learning service. That’s starting to change. “Machine learning is now built directly into CRM products,” explains Julian Poulter, research director for CRM and CX (customer experience) at Gartner. “We have seen about 30 use cases applying machine learning to CRM, but industry adoption is slow so far. The use cases include recommending alternative products, lead scoring and ecommerce recommendations.” That means the kinds of product recommendation features offered by Amazon and other ecommerce providers are within reach of many more organizations. But that’s not the only way machine learning can help.
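As a sketch of the lead-scoring use case Poulter mentions, a simple classifier can rank leads by predicted conversion probability. The features, data and library choice (scikit-learn) below are illustrative assumptions, not a description of any CRM vendor's built-in capability:

```python
# Minimal lead-scoring sketch: train a classifier on historical CRM leads
# and score new ones by their predicted probability of converting.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per lead: [email_opens, site_visits, company_size]
X_history = np.array([[12, 30, 500], [1, 2, 20], [8, 15, 250], [0, 1, 10]])
y_history = np.array([1, 0, 1, 0])  # 1 = converted, 0 = did not

model = LogisticRegression(max_iter=1000).fit(X_history, y_history)

new_leads = np.array([[10, 20, 300], [2, 3, 15]])
scores = model.predict_proba(new_leads)[:, 1]  # probability of conversion
for lead, score in zip(new_leads, scores):
    print(f"lead {lead} -> score {score:.2f}")
```

In a built-in CRM feature the training data and scoring would be managed by the platform; the value of the sketch is showing how little modeling machinery the use case actually requires.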


Spinnaker is the Kubernetes of Continuous Delivery

Despite its humble and slow start, Spinnaker is enjoying widespread adoption. Today, Spinnaker is backed by industry leaders like Microsoft, Google, Netflix, Oracle and so on. It’s supported by all major cloud providers, including, but not limited to, AWS, Google Cloud Platform, Microsoft Azure and OpenStack. Spinnaker users include big names like Capital One, Adobe, Schibsted, Lookout and more. There is a growing vendor ecosystem around it which includes players like Mirantis, Armory and OpsMx. ... There were roughly 400 people at the event, representing over 125 companies and over 16 countries. During the Summit, the community announced the governance structure for the project. “Initially, there will be a steering committee and a technical oversight committee. At the moment Google and Netflix are steering the governance body, but we would like to see more diversity,” said Steven Kim, Google’s Software Engineering Manager who leads the Google team that works on Spinnaker.


Anomaly detection methods unleash microservices performance

A symptom-manifestation-cause approach involves working back from external signs of poor performance to internal manifestations of a problem to then investigate likely root causes. For example, the symptom of increased response times can be tracked to the internal manifestation of excess latency in message passing between the app's services, which occurred because of a failing network switch. Other potential root causes exist for those same symptoms and manifestation, however. For example, an application design using overly large message requests, or too many small messages, would cause the same issue. These root causes would be found by different tools and resolved by different people. Change-impact analysis creates broad categories that lump together changes in component-level metrics based on their effect on external performance measures. These metric categories might include network link latency, database queue depth and CPU utilization, grouped according to assessments such as excessive resource usage, cost overages or response time.
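A toy sketch of that change-impact grouping idea follows; the metric names, thresholds and category rules are invented for illustration, since the article names the categories but not an implementation:

```python
# Hypothetical change-impact grouping: bucket component-level metrics
# by the kind of external impact their current readings suggest.
metrics = {
    "network_link_latency_ms": 180,   # component-level readings
    "db_queue_depth": 42,
    "cpu_utilization_pct": 96,
}

# Illustrative rules mapping metric states to impact categories.
rules = [
    ("response_time", lambda m: m["network_link_latency_ms"] > 100),
    ("excessive_resource_usage", lambda m: m["cpu_utilization_pct"] > 90),
    ("cost_overage", lambda m: m["db_queue_depth"] > 100),
]

impacts = [name for name, check in rules if check(metrics)]
print("likely impact categories:", impacts or ["none"])
```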


Unlock distributed analytics with a microservices approach


Combining BI and analytics software with a microservices approach enables average end users to drill down into data with specific types of queries. When it comes time to visualize that data, organizations must decide whether to build customized visualization tools in-house or adopt a third-party option. A vast number of options exist for visualization, including web-based platforms and stand-alone, open source tools. These tools tend to cover a range of data interaction, from complex depictions of near-real-time data to simple renderings. However, big data sources have their limitations. Streaming and unstructured data sources present challenges that mainstream analytical tools struggle to depict. For example, some query connections won't accept data set blending, which limits exploratory analysis. Teams may also encounter system timeouts, out-of-memory exceptions, long query waits and rendering limitations. Still, distributed analytics approaches can excel at big data.


Digital transformation in 2019: Lessons learned the hard way

Because of the focus on the technology components, the people side of the changes required for digital transformation often goes under-addressed, yet it is arguably the key success factor. That's because the people in the organization have to carry out the digital transformation, yet are often inadequately equipped to do so from a skill, culture, mindset, inclination, and talent perspective. Many organizations have had their digital change initiatives crash upon the shoals of insufficient human capability to carry them out or an inadequately enabling environment. Currently, lack of appropriately skilled personnel ranks in the top five obstacles to digital transformation and is reported by 39 percent of organizations. The good news is that improved organizational focus and improved techniques for upskilling workers to support digital transformation have been arriving. Expect to see more of both in 2019. The smart digital leader will use the resources of HR's L&D department to help drive them.


Multicloud does not eliminate vendor lockin

You might think you can avoid the trade-off by using containers or otherwise writing applications so they are portable. But there is a trade-off there as well. Containers are great, and they do provide cloud-to-cloud portability, but you’ll have to modify most applications to take full advantage of containers. That could be an even bigger cost than going cloud-native. Is it worth the avoided lockin? That’s a question you’ll need to answer for each case. Moreover, writing applications so they are portable typically leads to a least-common-denominator approach so they can work with all platforms. And that means that they will not work well everywhere, because they are not cloud-native. I suppose you could write portable applications that are cloud-native to multiple clouds, but then you’re really writing the application multiple times in advance and just using one instance at a time. That’s really complex and expensive. Lockin is unavoidable. But lockin is a choice we all must make in several areas: language, tooling, architecture, and, yes, platform.



Quote for the day:


"Leadership cannot just go along to get along. Leadership must meet the moral challenge of the day." -- Jesse Jackson


Daily Tech Digest - October 11, 2018

No company seems to be safe anymore. In 2018 alone, we have seen the social media giant Facebook reporting data breaches twice, affecting millions of users each time. As if this wasn’t enough, a couple of days ago Google reported exposing the data of more than 500K users of its social network Google+ between 2015 and March 2018. The ironic part is that Google reported no misuse of data but, in response to this incident, has decided to completely shut down the portal. Huh? Apparently, Google didn’t disclose this earlier citing fear of regulatory scrutiny. Wondering if we should still trust these tech giants with our personal data? European data regulation like the GDPR is a step in the right direction in protecting customers’ data, and these tech companies are now facing multi-billion dollar lawsuits. On a side note, the crypto industry is facing a similar situation, with more than $927 million worth of digital money stolen to date this year — 3.5 times more than in 2017.


The benefits of IAM processes, strategies for digitized companies


"Companies are using more and more systems than they ever have before. They're collecting more data, [and] the employees' job roles are changing faster," he said, adding that identity access management sits at the nexus of all those dynamics. Consider how an employee may require access to specific data or certain applications to work on a project, but will not need that access on an ongoing basis, he said. IT should be capable of changing access rights of not just that employee, but dozens, hundreds or even thousands of employees, as needed. However, not all organizations are maturing their IAM practices, Maxim said. "There are still a lot of companies that are doing very little with IAM -- they're working on spreadsheets, or they've reached a limit to what they could do with their homegrown systems," he said. However, he noted that many of them are "actively looking to find ways to streamline what they're doing."


Successful data-driven companies must balance human and machine roles

The anticipated redistribution of work between humans and machines may displace 75 million jobs, but it’s likely to create as many as 133 million new ones, too, according to the report. This major shift in jobs may not reassure those of you who believe technology is a threat to your role. But the reality is that smarter technologies provide an amazing opportunity to focus on the ways that we create the most value for our organizations. Creativity and strategic thinking remain distinctly human advantages. When paired with the increased processing capacity of machines, there is plenty of room to be optimistic about the future.

Understanding the strengths of people vs. machines

We are far from a reality where we can trust machines to make business decisions with human-like judgment and contextual understanding. Today, we trust machines to automate tasks and analysis in areas that are heavily parameterized and minimally risky.


Disaster Recovery: Data Center or Host Infrastructure Reroute


Regardless of which approach you take, even if everything works flawlessly, you still need to address the ‘brownout’ phenomenon: the time it takes for services to be restored at the primary site or at a secondary location. It is even more important to automatically send people to a different location if performance is impaired. Many people have heard of global server load balancing (GSLB), and while many use it today, it is not part of their comprehensive DoS approach. But it should be. If your goal with your DDoS mitigation solution is to ensure an uninterrupted service in addition to meeting your approved performance SLA, then dynamic GSLB, or infrastructure-based performance load balancing, has to be an integral part of any design. We can deploy this technology purely defensively, as we have traditionally done with all DoS investments, or we can change the paradigm and deploy the technology to help us exceed expectations. This allows us to give each individual user the best experience possible.
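A toy sketch of the performance-based GSLB decision described here: direct users to whichever site currently meets the SLA, falling back gracefully otherwise. The site names, latencies and SLA figure are invented:

```python
# Hypothetical dynamic GSLB decision: pick the site that currently
# meets the response-time SLA, degrading gracefully during a brownout.
SLA_MS = 200

sites = [  # freshly measured health/latency per site (invented numbers)
    {"name": "primary-dc", "up": True, "latency_ms": 450},   # brownout
    {"name": "secondary-dc", "up": True, "latency_ms": 90},
    {"name": "cloud-region", "up": False, "latency_ms": None},
]

def pick_site(sites):
    healthy = [s for s in sites if s["up"] and s["latency_ms"] is not None]
    within_sla = [s for s in healthy if s["latency_ms"] <= SLA_MS]
    candidates = within_sla or healthy   # no site meets SLA: take the best available
    return min(candidates, key=lambda s: s["latency_ms"])["name"]

print("route users to:", pick_site(sites))  # -> secondary-dc
```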


Suspected NASA Hacker Busted After Boasting About Exploits

The suspect was identified after a year-long investigation by the Polizia Postale - Italy's postal police - via its CNAIPIC group, which since 2008 has served as the national anti-crime computer center for the protection of critical infrastructure. It regularly investigates cybercrime. After identifying the suspect, police say they executed a search that resulted in the seizure of computing devices, which have tied the suspect to attacks against at least 60 Italian websites. In addition, rather than just being a member of the "Master Italian Hackers," the suspect appears to have been one of its leaders, authorities say. The Italian suspect is the latest in a long list of admitted hackers whose "too much information sharing" habits got them in trouble. To pick just one example: Last year, Russian-born Alexander Konstantinovich Tverdokhlebov, who emigrated to the U.S. in 2007, later becoming a naturalized citizen, pleaded guilty in U.S. federal court to having been "an active member of several highly exclusive Russian-speaking cybercrime forums."


SoftBank has a lot to worry about if it strikes this deal with WeWork


It’s very possible that the talks for SoftBank Vision Fund to invest up to $20 billion into WeWork will fail, Recode was told. Here are some of the hazards that could trip up either side over the next few weeks. The Vision Fund’s single biggest outside investor, the Saudi government, which holds a 45 percent stake, is under increasing political scrutiny after allegations that it is behind the disappearance of U.S.-based Washington Post journalist Jamal Khashoggi. Backing from foreign governments has always loomed as a major liability for venture capital investors. The SoftBank-Saudi ties are not new. But the Khashoggi revelations make it particularly bad timing for a deal, as WeWork could face reputational risk for taking money from a government that’s embroiled in such a high-profile human rights case. “If all that’s alleged is true, WeWork will be in bed with a regime that has expressed brazen disregard for virtually any norm of international politics,” said Chris Meserole, a foreign policy fellow at The Brookings Institution.


3 things you should do to prevent cyber attacks

The threat landscape is constantly evolving, with cyber criminals always looking for new exploits and studying one another’s tactics. As soon as a particular exploit proves successful, crooks the world over will adopt and refine it.  The majority of successful attacks come in the immediate aftermath of the popularisation of a particular attack method. That’s because its success is predicated on the fact that many organisations are vulnerable to it. Once the trend becomes common knowledge, organisations learn how it works and address it.  You can greatly minimise your chances of coming under attack by staying informed about growing trends. There are many ISACs (Information Sharing and Analysis Centres) that you can use to gather real-time threat intelligence.   When it comes to addressing new attack methods, processes and policies are relatively resilient and will perhaps only need to be tweaked. You are much more likely to need to update your software and web applications.


Automate everything or get left behind

Discovery and auto-monitoring. Sophisticated monitoring solutions use an increasing range of methods, including direct access to hosts via SSH and indirect access via configuration repositories like Active Directory and services like Windows Discovery, to extract facts from existing infrastructure and speed up monitoring configuration by operators. Leading-edge products are now moving towards automating the process completely: creating comprehensive maps of infrastructure, apps, and complete business services, and monitoring these things without the need for any manual intervention or direction. Alert processing, notification, escalation, integration. Alerting is, of course, a powerful form of automation. It entails decision-making, which may be simple or significantly more complex (e.g., several metrics, from separate systems, have entered states predictive of a particular kind of known failure for a critical business service). It involves sophisticated assignment and escalation based on issue, team rotas, time/date and other variables.
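A minimal sketch of the more complex kind of alerting decision described above, where metrics from separate systems are combined before notifying the right rota; all metric names, thresholds and addresses are invented:

```python
# Hypothetical composite alert: fire only when several metrics from
# separate systems together predict a known failure mode.
import datetime

readings = {
    "app.response_ms": 850,      # from the APM system
    "db.replication_lag_s": 30,  # from the database monitor
    "host.disk_util_pct": 97,    # from the host agent
}

def predicts_known_failure(r):
    # All three conditions together have (hypothetically) preceded outages.
    return (r["app.response_ms"] > 500
            and r["db.replication_lag_s"] > 10
            and r["host.disk_util_pct"] > 95)

rota = {"weekday": "team-oncall@example.com", "weekend": "sre-lead@example.com"}

if predicts_known_failure(readings):
    day = "weekend" if datetime.date.today().weekday() >= 5 else "weekday"
    print(f"ALERT: probable service failure; notify {rota[day]}")
```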


SD-WAN Adolescence Is About Interoperability and Scalability

Almost everyone (enterprises, CSPs, standards bodies, vendors) I spoke with acknowledges that data-plane interoperability in SD-WAN is unlikely in the near future. Enterprises and CSPs are telling me they don’t need it yet. This brings back memories of the old IPsec interoperability wars. Trying to create interoperable meshes of nodes from disparate vendors today is really putting the cart before the horse, because we need to start from the control plane. CSPs that are in the process of building or customizing their orchestration systems to integrate with SD-WAN offerings say that having interoperability at the control and management level allows for coordination between multiple SD-WAN domains. It also makes switching vendors feasible with much less pain. The present efforts focus on interoperability at the northbound API level, governing the APIs used to provision and control SD-WAN deployments.


Security warning: Attackers are using these five hacking tools to target you

Perhaps the most potentially damaging of the dangers detailed in the report are remote access trojans (RATs) - malware which is secretly installed onto an infected system, providing a backdoor to observe all activity and enabling the attacker to carry out commands that lead to data being stolen. The particular example given in the report is JBiFrost, a trojan typically employed by low-skilled cyber criminals but with the capability to be exploited by state actors. What makes JBiFrost so potent is that it is cross-platform, with the ability to operate on Windows, Linux, Mac OS X and Android. Often delivered via a phishing email, it allows attackers to move across networks and install additional software. This particular RAT is publicly available, and the cyber security agencies said they have observed it being used in targeted attacks against critical national infrastructure owners and their supply chain operators.



Quote for the day:


"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox


Daily Tech Digest - October 10, 2018

Underlying all this likely nonsense is the obvious fact that almost every computer chip in the world is made outside of the U.S., often in Asian locations. I used to laugh when I was told that I couldn’t bring my Lenovo laptop in, but I could bring in my Dell laptop, which itself was full of nothing but Asian-made chips. If you are worried about supply chain threats, and you should be, it’s not just one little purported spy chip you should be worried about. You can’t find a computerized device in the U.S. that doesn’t have foreign-made chips. There isn’t some secret U.S. government agency that goes around inspecting all those chips for security holes or backdoors before they get put into all our computers. To me it is a hilarious idea that the Chinese would have to insert a specialized, tiny spy chip when it would be far easier to put an intentional weakness or backdoor into any of the hundreds of chips that are used in every computer on the planet. It would be far easier to hide in the weeds than to create a dedicated spy chip that any hardware expert would notice and question.



Overcoming the top obstacles to digital transformation success

You should begin developing a solid digital transformation strategy by first establishing a small, integrated governance team with equal representation and influence from the business and IT, including security. The governance team will enable a clear line of communication between digital and legacy IT teams and ensure initiatives are synchronized so appropriate investments are made to harden core systems while securely exposing functionality that enables digital initiatives. While security was not cited as one of the top three barriers, it remains a concern. With the highly-fragmented state of data across most enterprises today, exposing data sources to new digital systems creates yet another opportunity for attack. IT and security are integral to governance to limit risk exposure as new digital capabilities are introduced. As you launch digital initiatives, especially if you are behind the digital curve, partner with digital leaders who can provide the capabilities you need to get your products to market securely while you continue learning and developing internally.


IT departments struggle to balance innovation with everyday IT operations


“Organisations have become acutely aware of the critical role technology now plays in overall business strategy, from enabling a more productive and connected workforce to increasing market share and customer loyalty,” she said. “The Insight Intelligent Technology Index signifies how competing demands on IT are inhibiting their ability to plan and innovate.” The index, which queried 200 IT professionals, also found 79% of IT decision makers felt there were not enough resources to effectively support the demand for innovation, with another 33% saying innovation was expected of them despite existing processes, practices and business operations not evolving in ways that allowed them to do so. Another 30% cited a lack of clearly defined roles and responsibilities in the organisation as a reason for the lack of innovation.


CEO Fraud: Barriers to Entry Falling, Security Firm Warns

To hide their efforts, attackers may alter the rules for a compromised email account to divert copies of their fraudulent messages - and potentially replies - to other, attacker-controlled accounts, Digital Shadows notes. Such fraud can take the form of submitting false invoices or modifying legitimate ones, swapping in details for accounts controlled by attackers. Because BEC scams typically exploit weak corporate controls, organizations can use many actions to better defend themselves, Digital Shadows says. One of the most basic steps is to ensure that email accounts always have two-step verification enabled. That at least prevents an attacker who has the login credentials from accessing the account. Controls around wire transfers can also be shored up, Digital Shadows says. Fraudsters have had success, for example, by compromising the email account of a CEO and then sending an email to the finance department saying a payment needs to be made.


Discovering Blind Spots in the Data


Usually, there’s a trade-off between precision and recall. Improving precision can drop the recall and vice-versa. It’s up to the business stakeholders to tell the data scientists which is more important: identifying more actual escalations at the cost of having more false escalations classified as escalations (high recall, low precision)? Or minimizing false escalations at the cost of missing many actual escalations (low recall, high precision)? If the business stakeholders go for high recall and low precision, they will need to engage more people to deal with a higher number of real escalations and possibly many false escalations. If they choose low recall and high precision, they can engage fewer people to deal with the escalations but will risk having the model miss many real escalations. In our case, initially, the business stakeholders preferred high precision over high recall so that they didn’t have to deal with a lot of false escalation alerts. Our dataset had a few features whose value changed with time. This introduced us to a phenomenon called signal leakage.
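To make the trade-off concrete, here is a small self-contained sketch (with invented scores and labels) showing how raising the classification threshold buys precision at the cost of recall:

```python
# Precision/recall at different thresholds for an escalation classifier.
# Scores and labels are invented for illustration.
scores = [0.95, 0.90, 0.80, 0.65, 0.55, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]  # 1 = real escalation

def precision_recall(threshold):
    predicted = [s >= threshold for s in scores]
    tp = sum(1 for p, l in zip(predicted, labels) if p and l)
    fp = sum(1 for p, l in zip(predicted, labels) if p and not l)
    fn = sum(1 for p, l in zip(predicted, labels) if not p and l)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

for t in (0.85, 0.50, 0.25):
    p, r = precision_recall(t)
    print(f"threshold {t:.2f}: precision {p:.2f}, recall {r:.2f}")
```

With these made-up numbers, the strict 0.85 threshold yields perfect precision but catches only half the real escalations, while the loose 0.25 threshold catches them all at the cost of many false alerts: exactly the choice the stakeholders are being asked to make.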


Why 60% of IT security pros want to quit their jobs right now

The main reasons cited by the IT pros who wanted to leave were job dissatisfaction and the lack of growth opportunities within their companies, said the release. The survey gathered data from more than 9,000 IT security professionals and decision-makers in the enterprise, said the release. This survey could give businesses better insight into how to retain and support their current tech talent. Other top reasons for employees looking to quit include unhealthy work environments (53%), absence of IT security prioritization from executives or upper management (46%), unclear job expectations (37%), and lack of mentorship (30%), said the release. Buy-in from upper management is crucial for security efforts, since only 38% of CEOs are really engaged in cybersecurity. This low engagement percentage is proof that executives don't prioritize cybersecurity as much as other factors of business, which further validates the dissatisfaction IT professionals are feeling.


NASA is using HoloLens AR headsets to build its new spacecraft faster


In the headset, the workers can see holograms displaying models that are created through engineering design software from Scope AR. Models of parts and labels are overlaid on already assembled pieces of spacecraft. Information like torquing instructions—how to twist things—can be displayed right on top of the holes to which they are relevant, and workers can see what the finished product will look like. The virtual models around the workers are even color-coded to the role of the person using the headset. For Jory’s team, which is currently constructing the heat shield skeleton of Orion, the new technology takes the place of a 1,500-page binder full of written work instructions. Lockheed is expanding its use of augmented reality after seeing some dramatic effects during testing. Technicians needed far less time to get familiar with and prepare for a new task or to understand and perform processes like drilling holes and twisting fasteners. These results are prompting the organization to expand its ambitions for the headsets: one day it hopes to use them in space.


Why today's containers and microservices will be tomorrow's legacy sooner than you think

The industry will be stuck with container platforms because these are interesting technologies that give the operators a taste of the power of running massive jobs at scale. Unfortunately, the ROI of maintaining that platform is elusive, since very few companies running these platforms will ever reach a point where they can even optimize job scheduling, and the cost of maintaining the container platform itself competes with the modest improvements in the developer's user experience. A similar phenomenon was seen with OpenStack half a decade ago, when, in the rush to have an in-house cloud, many companies grossly underestimated the short- and long-term associated costs and are now stuck maintaining OpenStack in perpetuity for the sake of the unnamed applications running on top of it. Well, that's a depressing thought, isn't it? But true. And why? Well, because technology change is hard. 


ICO fines Heathrow Airport £120,000 over lost USB stick

“Given the way the data was captured and displayed, it would not be readily available or searchable, but [the information commissioner] considers that a motivated individual could locate and extract the data in a more permanent way,” the notice said. Although the USB stick contained more than 1,000 files overall, just 1% of this information could be classified as being personal in nature. Also, a subsequent investigation by the ICO revealed less than 2% of the airport’s 6,500-strong workforce had received data protection training. “Given that Heathrow Airport is Europe’s busiest airport, where high-level security should be inherent, loss or unauthorised disclosure of personal data of staff could have presented a greater risk if found by individuals who had not handled the data responsibly,” the penalty notice said. “Taking into account all of the above, the commissioner has decided that the penalty is £120,000.” According to the report, the USB stick was found in Kilburn, west London, on 16 October 2017, before being handed in to a national newspaper 10 days later.


Behavioral Biometrics: Key Challenges

As more companies move away from passwords toward behavioral biometrics, they face new challenges, says Rajiv Dholakia, vice president of products at Nok Nok Labs, a company focused on next-generation authentication. Behavioral biometrics relies on a behavioral trait of an individual, rather than a physical trait. Examples include speech patterns, signatures and keystrokes. "There are no standards as such in this area on how the information is collected, how it's stored and how it's processed," Dholakia says in an interview with Information Security Media Group. "And therefore, there may be some privacy hazards associated with the technique unless a manufacturer makes it super clear exactly what is being collected, how it's being processed and whether that profile data is anonymized." Other behavioral biometrics issues include accuracy and concerns about passive collection of information from users, he says. "Moreover, when you are using behavioral biometrics, you have to be super certain that the information coming from all sensors is coming from a real device as opposed to a virtual machine," he says.



Quote for the day:


"He who cannot be a good follower cannot be a good leader." -- Aristotle


Daily Tech Digest - October 09, 2018

NoOps Future of Cloud Networking
Recent technology advancements in cloud networking point the way toward a NoOps approach to cloud networking, meaning the automation of processes that now depend on direct control by human networking experts. One advancement is that the as-a-service revolution is finally reaching cloud networking infrastructure. What began with virtualized networking hardware has more recently progressed to software-defined (SD) cloud routing, enabling distributed, heterogeneous networking infrastructure to traverse public cloud, on-premises, hybrid cloud and multicloud environments. SD cloud routing centralizes and automates networking functions that previously required hands-on, time-consuming attention by highly certified human experts. As a result, SD cloud routing shifts networking infrastructure control directly to CloudOps and DevOps engineers, who are no longer dependent on networking professionals to establish or maintain their cloud networking infrastructure.


8 key elements of an effective disaster recovery plan

As the Southeast U.S. continues to recover in the aftermath of Hurricane Florence, it’s important for IT leaders to consider the effect hurricanes and other natural disasters have on healthcare information needs - now, in the future and before the next challenge strikes. The sobering fact is more than half of organizations (58 percent) are not ready for a major loss of data. In fact, 60 percent of companies that suffer a major data loss will go bankrupt within six months, according to data from Washington, D.C.-based research firm Clutch. ... In order to prepare for a disaster, organizations need a strong DR plan and must be willing to go above and beyond it in implementation. Not only should a company build its core processes into a DR plan and have a team that is designated for DR tasks, but it should also perform a risk assessment to best determine what challenges might arise and how dangerous each of those elements is. One way to do this is through security penetration testing, in which an organization tests its system’s security by trying to exploit its weaknesses. Since disaster recovery is extremely important in healthcare compliance and other regulated industries, it is best to also incorporate compliance into security and DR planning.


Cooperation vital in cyber security, says former Estonian minister


Looking to the future, Kaljurand said states should come together and continue the UN GGE process. “But they have to change the process. If they want the process to be serious and respected, if they want it to be adopted by a wider number of states, [the process] has to be open, transparent and inclusive.” The challenge for states and governments is to find ways of cooperating so that those who want to contribute will have the chance to be part of the process, said Kaljurand. “The UN has to lead by example and say that ‘multi-stakeholder’ means all stakeholders: governments, businesses, industry, civil society, academia and the technical community,” she said. Within the context of the UN, Kaljurand said Western democracies should be much more active in promoting their understanding of the use of information and communication technologies and how technology can change countries in terms of economy, governance, people, education and awareness.


Scaling your developer community with plugins

Scale the user community, and the developer community should grow too. But, how? Attracting users isn't easy either—there's a reason we have marketing. Unless you have a big budget for events, materials, adverts, etc., scaling the user community isn't much easier than scaling the developer one. So we have limited options for attracting users, and virtually none for the developers. What are we left with? Well, for ideas we could look to the ways users become developers. You'll always have a few people that were destined to become contributors—the right mix of domain knowledge, interest/drive, and programming skill. But there's a larger group who have an itch to scratch but perhaps aren't so confident in diving into the full code base. These are the people we need to target. As evidence, here's some data from the most recent Foreman Community Survey (the community I work in) showing nearly one-third of the 160 respondents would like to contribute but don't know where to start.


Wi-Fi 6 is coming to a router near you

The basic technology behind Wi-Fi 6, which is still known as 802.11ax on the technical side, promises major advances beyond just higher data rates, including better performance in dense radio environments and higher power efficiency. Wi-Fi 6 is also seen as a possible communications method for internet-of-things (IoT) devices that have low power capabilities and limited battery life. Thanks to a feature called target wake time, Wi-Fi 6 IoT devices can shut down their Wi-Fi connections most of the time and connect only briefly as scheduled in order to transmit data they’ve gathered since the last time, thus extending battery life. Farpoint Group principal and Network World contributor Craig Mathias said that, given the degree to which consumerization is the driving force even behind enterprise IT these days, the re-naming is probably a step in the right direction, but that doesn’t mean that simply labeling 802.11ax as Wi-Fi 6 tells the whole story.


Mapping the Market for Agile Coaches


By no means did we arrive at definitive answers, but what we learned is an important contribution to the field. In fact, we believe that this article contains more information on agile coach compensation than has ever been available in one place. In gathering this information, we focused on using only the highest quality, verifiable data. As such the data below comes from only three sources: Agile coaching positions that the five agile coaches who participated in the meeting have had, mostly over the last two years; Positions that the authors have been offered, mostly over the last year; and Information that close, trusted friends of the authors have provided about their coaching positions. ... When extrapolating this information to other situations, we caution: The authors are mid-level agile coaches and above. As such, entry level positions are likely to be under-represented; and The authors are located in the greater San Francisco Bay area, one of the most expensive areas in the world. Although not all of the positions are based in this geography, most of them are.


Below the Surface, Microsoft is not the new Apple

When Surface was announced, there was much debate around whether Microsoft could break the pattern of doom when OS licensors compete with their licensees. The company has largely avoided conflict by focusing its device portfolio to compete most directly with Apple's and by embracing Surface Pro-like products from other PC manufacturers. Surface's success has likely emboldened Google and Amazon to produce their own devices while continuing to seek broad licensing. One could even argue that Surface has provided more incentive for Microsoft to step up efforts such as its own stores and retail areas within Best Buy that also feature licensees' PCs. But think about that quip about the Surface not being a PC. What does that say about the merits of other products that are PCs and bound by the same version of Windows? The contrast between the treatment received by Surface and licensees was not pretty when, at its 2017 fall education event, Microsoft introduced the premium Surface Laptop, while the third-party announcements were focused on low-margin, Alcantara-bereft laptops.


How can IT put Windows 10 containers to use?


To use Windows 10 containers with Docker, IT must enable Microsoft Hyper-V on the endpoints it plans to deliver the container to. Microsoft supports two types of Windows containers: Windows Server Containers and Hyper-V Isolation Containers. A Windows Server Container runs directly on the host and shares the host's kernel. Only Windows Server can host Windows Server containers. A Hyper-V Isolation Container runs in a highly optimized virtual machine, making it more secure than a Windows Server Container. Both Windows Server and Windows 10 can host Hyper-V Containers, which is why IT must enable Hyper-V on Windows 10 machines. Docker containers are well-suited for Agile application delivery scenarios, especially for applications based on a microservice architecture where the services within an app run separately from one another. IT can easily create and deploy Windows 10 containers on developer desktops and testing machines and then implement them in production deployments.
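As a hedged sketch of the Hyper-V isolation option, the snippet below launches a Windows container from Python using the Docker SDK (docker-py). It assumes a Windows 10 host with Hyper-V enabled and Docker switched to Windows containers, and the image tag is just an example:

```python
# Run a Windows container with Hyper-V isolation via the Docker SDK.
# Assumes: Windows 10 host, Hyper-V enabled, Docker set to Windows
# containers, and `pip install docker`.
import docker

client = docker.from_env()

output = client.containers.run(
    "mcr.microsoft.com/windows/nanoserver:1809",  # illustrative image tag
    command=["cmd", "/c", "echo", "hello from a Hyper-V container"],
    isolation="hyperv",   # "process" would share the host kernel instead
    remove=True,          # clean up the container after it exits
)
print(output.decode())
```

Switching `isolation` between `"hyperv"` and `"process"` is the programmatic expression of the two container types the article describes.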


Intel, AMD both claim server speed records

Even more impressive is that all systems tested included mitigations for the Spectre and Meltdown vulnerabilities. To mitigate those flaws in the CPU, some functions have to be disabled either in software or at the firmware level, and that can mean performance hits, sometimes significant ones. It shows Intel has a per-core advantage because its top Xeons are 28-core, whereas the AMD Epyc is 32-core. And one benchmark shows Intel’s AVX-512 extensions clobber Epyc; they accelerate floating-point instructions and impact compute, storage, and networking functions. So, it’s a big deal. The downside? They might be hard to get. Last week CFO and interim CEO Robert Swan published a letter saying that the company was suffering from a shortage of chips due to increasing demand, but Swan assured customers the company would be able to meet demand. The good news for data center operators is that the problem seems to be more on the PC side than the server side.


Four critical KPIs for securing your IT environment

So, what should you be measuring when it comes to your security program? As the old saying goes: If you can’t measure it, you can’t manage it. Here are four Key Performance Indicators (KPIs) that can help enterprises navigate the murky waters of cybersecurity and reduce anxiety surrounding the possibility of cyber attacks. ... One practice that is key to this KPI is patching, so be sure to document patch cycles. However, some assets like industrial control systems, stamping presses or systems for other industrial uses may not be able to be patched. Many times, the manufacturer of the equipment will not support an updated operating system. If patching is not an option, the next best step is to use application whitelisting on the asset, which ensures that it will function as a fixed purpose device. That being said, patching in and of itself is not a silver bullet: There are still many assets in which neither of these options is feasible. If that is the case, the only option is isolation—and isolating an asset in its own network segment, in many cases, is the only way to enhance security.
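For instance, a patch-cycle KPI could be computed from an asset inventory like the sketch below, which also flags unpatchable assets for whitelisting or isolation; the record fields and thresholds are hypothetical:

```python
# Hypothetical patching KPI: share of patchable assets patched within
# the documented patch cycle, plus a worklist of unpatchable assets.
CYCLE_DAYS = 30

assets = [
    {"name": "web-01", "patchable": True, "days_since_patch": 12},
    {"name": "db-01", "patchable": True, "days_since_patch": 55},
    {"name": "press-4", "patchable": False, "days_since_patch": None},  # ICS
]

patchable = [a for a in assets if a["patchable"]]
in_cycle = [a for a in patchable if a["days_since_patch"] <= CYCLE_DAYS]
print(f"patch KPI: {len(in_cycle)}/{len(patchable)} within {CYCLE_DAYS} days")

for a in assets:
    if not a["patchable"]:
        print(f"{a['name']}: apply whitelisting or isolate in its own segment")
```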



Quote for the day:


"One of the sad truths about leadership is that, the higher up the ladder you travel, the less you know." -- Margaret Heffernan


Daily Tech Digest - October 08, 2018

A rough guide to your next (or first) fog computing deployment

There’s a hierarchy of storage options for fog computing that runs from cheap but slow to fast and expensive. At the former end, that option is network-attached storage. A NAS offers huge storage volumes, particularly over a distributed network, but that means latency times measured in seconds or minutes. Rotating disks could work well for big media libraries or data archives, according to Byers, while providing substantially better response times. Further up the hierarchy, flash storage, in the form of regular SSDs, provides much the same functionality as a spinning platter, with the well-known tradeoff in increased price-per-GB for much faster access times. That could work best for fast bulk storage, though Byers also notes that there are concerns about access speeds dropping off after a large enough number of read/write cycles. “After you write to a given address in the chip more than about 2,000 times, it starts getting harder to reprogram it, to the point where, eventually, you’ll get write failures on that sector of flash drive,” he said.



GDPR As Catalyst: Protect Data And Grow the Business (Part 4)

A successful collaboration depends on the ability to share information quickly and easily with third-party companies, working across organizational and geographical boundaries. However, it is vital to balance the need to provide business partners with ready access to enterprise data while safeguarding valuable intellectual property and sensitive corporate information. In addition, they must meet many industry- and country-specific compliance requirements – including the General Data Protection Regulation (GDPR) for managing personal data. Data processors and controllers are both responsible for GDPR requirements for personal data, wherever that data may be in their business network. They need to be able to quickly and securely share data with partners. By using dynamic attribute-based access controls, they can classify and segregate data based on metadata, content, association, or policy; establish fine-grained, attribute-based access policies; automate access authorization based on policies; and centralize activity logging and auditing to simplify compliance reporting.
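A toy sketch of such a dynamic, attribute-based access decision with centralized audit logging might look like the following; the attributes, policy rule and log format are illustrative assumptions, not a reference implementation:

```python
# Minimal attribute-based access control (ABAC) check with audit logging.
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("abac.audit")

# One illustrative policy: partners may read documents classified
# "shared" for their own project, and nothing else.
def is_allowed(subject, action, resource):
    return (subject["role"] == "partner"
            and action == "read"
            and resource["classification"] == "shared"
            and resource["project"] == subject["project"])

def access(subject, action, resource):
    allowed = is_allowed(subject, action, resource)
    audit.info("user=%s action=%s resource=%s allowed=%s",
               subject["id"], action, resource["id"], allowed)
    return allowed

partner = {"id": "acme-eng-7", "role": "partner", "project": "orion"}
doc = {"id": "spec-42", "classification": "shared", "project": "orion"}
print(access(partner, "read", doc))   # True
print(access(partner, "write", doc))  # False, and both decisions are logged
```

The key property the article describes is visible here: the decision depends on attributes of the subject and the resource rather than on a static access list, and every decision leaves an audit trail for compliance reporting.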


Sony Smart TV Bug Allows Remote Access, Root Privileges


The flaws – a stack buffer overflow, a directory traversal and a command-injection bug – were found by Fortinet in March by its FortiGuard Labs team. The most serious of the vulnerabilities is the command-injection (CVE-2018-16593) bug, which is tied to a proprietary Sony application called Photo Sharing Plus. The app allows users to share multimedia content from their phones or tablets via Sony TVs. “This application handles file names incorrectly when the user uploads a media file,” wrote Fortinet’s Tony Loi, who found the vulnerability. “An attacker can abuse such filename mishandling to run arbitrary commands on the system, which can result in complete remote code-execution with root privilege.” Fortinet researchers said a compromised TV could be recruited into a botnet or be used as springboard for additional attacks against devices that shared the same network. To be successful, an adversary would need to be on the same wireless network as the Sony TV.


Separating high value from low value KPIs in data governance efforts

It's not necessarily a bad thing for a business to know how many overall data quality problems happen in a specified span. But, the reason this could be a lower-value KPI in many organizations is that it is likely not adequately specific. In contrast, a KPI for resolved issues indicates if a company is making gains in remedying problems or not. Looking at the number of data quality issues also becomes more problematic if a company has numerous locations. Failing to separate data quality issues into outstanding and resolved categories could also promote inaccurate presumptions about performance. Indeed, it's best to keep the number of data quality issues as low as possible. But, it's arguably even more critical for company representatives to accurately ensure they're promptly addressing and thoroughly handling all issues. Only viewing overall issues may not represent how those problems get treated.
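As a sketch of the distinction, splitting a raw issue log into resolved and outstanding counts, and breaking outstanding issues down by location, might look like this (the record fields are hypothetical):

```python
# Split a raw data-quality issue log into higher-value KPIs.
from collections import Counter

issues = [  # hypothetical issue records
    {"location": "plant-a", "resolved": True},
    {"location": "plant-a", "resolved": False},
    {"location": "plant-b", "resolved": True},
    {"location": "plant-b", "resolved": True},
]

total = len(issues)
resolved = sum(1 for i in issues if i["resolved"])
print(f"total: {total}, resolved: {resolved}, outstanding: {total - resolved}")
print(f"resolution rate: {resolved / total:.0%}")

by_location = Counter(i["location"] for i in issues if not i["resolved"])
print("outstanding by location:", dict(by_location))
```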


The first smart display for business: Your Android phone

The new smart display interface will constantly show contextual information such as the time, the weather, battery status, and other data. Google released the third version of its smartwatch operating system, Wear OS, which comes with an improved Google Assistant feature. The biggest change: proactivity. The Wear OS Google Assistant can offer all kinds of contextual information (some of it based on personal data mined in Gmail). This makes sense, because wristwatches can gather amazing contextual data, such as user location and also whether the user is walking or sitting. I think this is a preview of what’s coming for the docked Android phone version of Google Assistant. Phones have even better contextual information than watches, because placing the phone in the dock says a lot about intention — namely that the user is not intending to leave and go somewhere else, but plans to stay in a single place and may want hands-free notifications and assistance.


Microsoft halts rollout of Windows 10 October 2018 Update: What happens next?

Via email, a Microsoft spokesperson confirmed that announcement: "We have paused the rollout of the update while we continue to investigate reports from some customers." In a tweet, Dona Sarkar, who runs the Windows Insider Program, advised anyone affected by this issue to call Microsoft's support lines: "They have the tools to get you back to a good state." The implication in that tweet (and in the language from the original bulletin) is that the files have not been deleted but are available elsewhere on the system disk. Update: Roughly 36 hours after the initial publication of the support bulletin, Microsoft edited its contents. It now reads, "If you have manually checked for updates and believe you have an issue with missing files after an update, please minimize your use of the affected device and contact us directly..." [emphasis added] In the United States, you can reach Microsoft Support at 1-800-MICROSOFT (1-800-642-7676). For Windows 10 customers in other regions, check the list of local support numbers on the Global Customer Service Phone Numbers page.


Software-defined networking security involves 3 factors


To fully protect confidentiality, it's necessary to encrypt network traffic. IT teams should also consider encrypting the control channel in the environment, which includes the communications between an SDN controller and the data plane devices that actually move packets. Moreover, if an SDN system includes any ability to cache data -- e.g., as part of a network flight recorder feature -- or if it has data compression features, it may be necessary to encrypt data stored in memory, or even on a disk, in data plane devices or the controller. SDN systems can defend themselves from attack, but this requires hardened platforms for both controllers and data plane devices. If the SDN controller is running on a poorly secured Linux server, for example, it doesn't matter how secure the SDN system riding on the nodes is at a high level. Any off-the-shelf SDN system should have a secured base -- whether Linux, CentOS or something else -- when it comes out of the box.
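As one simplified illustration of encrypting the control channel, a controller could require mutually authenticated TLS from data plane devices before accepting control connections. In this Python sketch the certificate file paths are assumptions, and port 6653 is the conventional OpenFlow control port:

```python
# Sketch: TLS-protected control channel between an SDN controller and
# data plane devices, with client certificates required (mutual TLS).
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("controller.crt", "controller.key")  # assumed paths
ctx.load_verify_locations("device-ca.crt")               # CA that signs device certs
ctx.verify_mode = ssl.CERT_REQUIRED                      # devices must authenticate

with socket.create_server(("0.0.0.0", 6653)) as srv:     # OpenFlow control port
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()                    # blocks for one device
        print("encrypted control connection from", addr)
        conn.close()
```

This only covers the channel itself; as the passage notes, cached or compressed data on the controller and data plane devices may need encryption at rest as well.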


Juniper CEO Rahim talks network, security and multicloud trends

There’s no way to get around the biggest trend, and that is the tectonic shift to cloud and multicloud. I am not just talking about the hyperscale users either. I am talking telcos and enterprises. It’s a sign of the times that every CIO is trying to take advantage of a multicloud environment, whether it’s to build out an infrastructure to handle it or deploy an overlay or underlay – they just cannot do it by themselves. That’s why we have so heavily invested in multicloud connectivity and software services development.  The second one is the move to 5G. Overall we don’t think 5G technologies will go mainstream until next year, but the preparation for it is well underway. Cloud services and providers are developing the infrastructure and capacity to take advantage of 5G now. Security would be the third, and what we are seeing is the trend of customers tying together networks and security technologies to develop more effective policies to block malware and protect the enterprise against threats better than ever before.


Open source is the future, but it will cost you more than you think

Open source has never been known for being the people that sit and finish up projects. They've always sort of gotten it to a good solid point that does 80% of what you want it to do, or it works well enough but there's not great interfaces and things on it. What tends to happen is, either commercial companies like Red Hat...end up making it usable for them afterwards. We obviously also see the public cloud beginning to take those open source projects and turn them into managed services as well. Such companies—including system integrators—do the "last mile" work necessary to get open source projects ready for enterprise consumption. Red Hat makes billions on this model, yet it still remains more of an anomaly than it should. We have MongoDB, Elastic, the combined Cloudera and Hortonworks, and other open source companies, but not nearly as many as we should, given how dominant open source has become in the area of enterprise infrastructure.


Be Prepared for Disruption: Thinking the New Unthinkables


The fear is that talk of “purpose” remains a convenient and fashionable, but empty, slogan. It may only be mainstreamed when there is hard evidence that having strong values adds money and social value to the company. Mayer is leading the charge to find rigorous data through his work at Oxford and the British Academy project on the Future of the Corporation to establish a causal link between values and value. “Until that’s the case, it’s going to be extremely difficult to persuade the investment community that they should be moving in this direction in a big way,” he says. This is despite the very clear signals now being sent by the public to both corporate and political leaders that purpose matters. These unthinkable scenarios — such as Collymore’s young customers deserting overnight — must be accepted, not ignored. The challenge is even more acute in a world of digital transformation. Artificial intelligence and biotech are bringing huge changes to society. For leaders, a new clarity of purpose and a moral compass is essential, as is an understanding that huge new disruptions are the hallmark of the new normal.



Quote for the day:


"Open Leadership: the act of engaging others to influence and execute a coordinated and harmonious conclusion." -- Dan Pontefract