Daily Tech Digest - October 13, 2018

Of the survey respondents who report a blockchain project in the pilot stage, 54 percent say the effort sometimes or often hasn’t been justified by the result. This should be a call to more effective action. To help executives answer that call, the report offers four strategies that can be used to build trust.  ... The participants in a blockchain ecosystem need to decide what the operating standards will be and what various users will be able to see and do. The design begins with the strategic business model, which includes making decisions about whether the blockchain will be permissionless, and thus available to everyone, or permissioned (having various levels of permissions). Permissions determine participants’ roles and engagement with the blockchain, which can vary from entering information or transactions to only viewing information. The choice of model isn’t automatic; organizations will decide based on design and use case considerations. They will also need to consider the type of network to establish. Forty percent of survey respondents report that they are using permissioned blockchains, 34 percent are working with permissionless chains, and 26 percent are taking a hybrid approach.
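The role-based permissioning the report describes can be illustrated with a toy sketch. The roles, names and ledger structure below are hypothetical, not any specific blockchain platform; the point is only that permissions gate what each participant can see and do.

```python
from enum import Enum

class Role(Enum):
    VIEWER = "viewer"        # may only read entries
    SUBMITTER = "submitter"  # may also enter transactions

class PermissionedLedger:
    """Toy permissioned ledger: participants are assigned roles
    that determine their engagement with the chain."""
    def __init__(self):
        self.entries = []
        self.participants = {}  # name -> Role

    def register(self, name, role):
        self.participants[name] = role

    def submit(self, name, entry):
        if self.participants.get(name) != Role.SUBMITTER:
            raise PermissionError(f"{name} may not submit entries")
        self.entries.append(entry)

    def view(self, name):
        if name not in self.participants:
            raise PermissionError(f"{name} is not a participant")
        return list(self.entries)

ledger = PermissionedLedger()
ledger.register("supplier", Role.SUBMITTER)
ledger.register("auditor", Role.VIEWER)
ledger.submit("supplier", {"shipment": 42})
print(ledger.view("auditor"))  # the auditor can read but not write
```

A permissionless chain would simply skip the registration check, making every participant a submitter.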


How to put cybersecurity threats into a business context

Focusing on business impact is a different way to think about cybersecurity, and it requires a different mindset than that of tactically responding to cybersecurity threats. Cybersecurity used to be all about preventing attacks, and a breach either occurred or it didn't. "Now, most organizations understand that cybersecurity is not a problem to be solved but a risk to be managed," says Andrew Morrison, US leader of cyber strategy, defense and response at Deloitte & Touche. "Most of the market is acclimated to the fact that it's no longer if an attack will occur but when an attack will occur and how we will manage it. That entails a totally different mindset." "Risks, by nature, can be accepted, mitigated, or transferred," he says. ... A business-focused description of the same problem, however, might be that patching the vulnerability will reduce the probability of a breach to a particular database, which, if exposed, will cost a particular amount of money in lost business, fines and remediation expenses.
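That business-focused framing is essentially an expected-loss calculation. A minimal sketch, with entirely made-up probabilities and costs:

```python
def expected_loss(breach_probability, breach_cost):
    """Annualized expected loss for a single risk scenario."""
    return breach_probability * breach_cost

# Hypothetical figures: a database breach would cost $2M in lost
# business, fines and remediation.
before_patch = expected_loss(0.10, 2_000_000)  # 10% chance per year
after_patch = expected_loss(0.02, 2_000_000)   # patched: 2% chance
risk_reduction = before_patch - after_patch
print(f"Patching reduces expected annual loss by ${risk_reduction:,.0f}")
```

Expressed this way, a patch stops being a technical chore and becomes a quantifiable reduction in business risk that can be weighed against its cost.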


Big data processing techniques to streamline analytics


Addressing big data processing techniques requires innovative algorithms and programming, rather than simply adding hardware power. A widely used solution is indexing and partitioning the data to provide better access. GeoSpock's infin8 uses data indexing to process and organize data for subsecond retrieval, ingesting and processing raw data at any scale and then creating an organized index that preserves every record of the original data set. Making the algorithms smarter has another interesting effect, too: it allows companies to reliably harvest data from images, video and audio, which opens the door to new generations of applications that can "look and hear." These advancements let machines scan footage and tag the objects or people they detect. They can also be used as part of companies' intelligence-gathering arsenals. Artificial intelligence provides big benefits in this realm. Advancements in artificial intelligence require large amounts of data to operate properly, and these AI tools provide a better view of the data, showing which parts of the data set are more useful and which have less value and can be deprioritized.
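The indexing-and-partitioning idea can be sketched in a few lines. This is not GeoSpock's implementation, just the general technique: partition records by a key (here, a hypothetical region) and keep each partition's timestamps sorted so range queries use binary search instead of a full scan.

```python
from collections import defaultdict
import bisect

class PartitionedIndex:
    """Toy sketch: partition records by region, and keep each
    partition in timestamp order for fast range retrieval."""
    def __init__(self):
        self.ts = defaultdict(list)    # region -> sorted timestamps
        self.recs = defaultdict(list)  # region -> records, same order

    def ingest(self, region, ts, record):
        # Preserve every record; insert in timestamp order.
        i = bisect.bisect(self.ts[region], ts)
        self.ts[region].insert(i, ts)
        self.recs[region].insert(i, record)

    def query(self, region, start_ts, end_ts):
        # Binary search within a single partition only.
        lo = bisect.bisect_left(self.ts[region], start_ts)
        hi = bisect.bisect_right(self.ts[region], end_ts)
        return self.recs[region][lo:hi]

idx = PartitionedIndex()
idx.ingest("eu", 100, "a")
idx.ingest("eu", 300, "c")
idx.ingest("eu", 200, "b")
print(idx.query("eu", 100, 200))  # touches only the "eu" partition
```

Production systems layer far more on top (distribution, compression, spatial keys), but the payoff is the same: queries touch only the relevant slice of the data.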


Why Business Leaders Shouldn’t Have Blind Faith in AI

Most machine learning algorithms are also bad at thinking about what Athey calls “what-if” scenarios. Like what would happen if a company were to change its prices, or if it hadn’t run a certain ad campaign. And here is where misguided faith in the accuracy of machine learning can become problematic in practice. Consider an algorithm designed to predict hotel-room occupancy based on observed prices, Athey says. It would look at historical occupancy rates and prices and draw the correct conclusion that the hotel is full when prices are high. However, if that predictive model was applied to optimize prices, it would lead to the conclusion that in order to get more people into your hotel, you should raise prices. “Which is of course wrong,” Athey says. “Just because higher prices are correlated with a full hotel doesn’t mean if you change your price you will sell more hotel rooms.”
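Athey's hotel example can be made concrete with simulated data (all numbers invented): occupancy is driven by demand, and the manager raises prices when demand is high, so observed price and occupancy are positively correlated even though the true causal effect of a price increase on occupancy is negative.

```python
import random

random.seed(0)
prices, occupancies = [], []
for _ in range(1000):
    demand = random.uniform(0.0, 1.0)        # unobserved demand driver
    price = 100 + 100 * demand               # manager prices high when busy
    # The TRUE causal effect of price here is negative (-0.002/dollar),
    # but demand dominates what we observe.
    occupancy = 0.3 + 0.8 * demand - 0.002 * (price - 150)
    occupancy += random.gauss(0, 0.05)       # measurement noise
    prices.append(price)
    occupancies.append(occupancy)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# The observed correlation is strongly positive, so a naive predictive
# model "learns" that raising prices fills the hotel.
print(round(corr(prices, occupancies), 2))
```

A model trained on this data predicts occupancy well, yet using it to set prices would be exactly the mistake Athey warns about: the correlation reflects demand, not the effect of a price change.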


Regulators can do more to encourage fintech innovation

A lot of work has been done in this area by the Consumer Financial Protection Bureau’s Office of Innovation and Project Catalyst, its predecessor, and by states like Arizona, which became the first state in the United States to adopt a regulatory sandbox statute. Yet these efforts, while welcome, fall short because they are largely focused on each agency’s policies and procedures and participant eligibility. Fintechs need more than process-oriented frameworks. To be successful, regulatory sandboxes require clearly articulated safeguards, terms of use and expectations on transparency. These matters are too important to be left to one-off negotiations. Regulatory sandboxes sound like a great idea, but what actually is a regulatory sandbox? In order for regulatory sandboxes to succeed, stakeholders need to have a common understanding — and acceptance — of the basic concept. First, regulatory sandboxes need a better name. Terms like “clinical trial,” “experiment,” or “lab” may better convey what is really needed.


This AI can predict your personality just by looking at your eyes

The project used artificial intelligence to track and monitor the eye movements of 42 individuals using tools from SensoMotoric Instruments. Those findings were then cross-checked with well-established questionnaires that define personality traits. Of the five key traits – openness, conscientiousness, extraversion, agreeableness and neuroticism – the technology easily identified four: neuroticism, extraversion, agreeableness and conscientiousness. The 42 people were fitted with an eye tracker and given five Australian dollars and 10 minutes to make a purchase in a university campus shop. When they returned they removed the eye tracker and filled in personality and curiosity questionnaires. The findings were analyzed to show how trait-specific eye movements vary across activities. While the study used a small sample and the authors said the predictions aren’t yet accurate enough for practical applications, it does shed light on the close link between personality and eye movements. Pupil diameter, for example, was important for predicting neuroticism.


Meet Your New Colleague: AI


How potential employees actually speak to AI is a different conversation than how potential employees should speak to AI, he added. That is, it’s unclear whether how a person treats a machine says anything about how that person would treat other people, and it’s unclear whether something like a person being rude to a machine agent should impact their job prospects. “We can certainly agree that we do care if it’s a human recruiting coordinator,” Mortenson said. But machines have no feelings or emotions and cannot be offended, so it would be easy to argue why employers shouldn’t care. Ultimately, “I do think we should care even if it is a machine,” Mortenson said. “I understand why we might care a little bit less, but I don’t think we can just discard that as a signal.” He gave the example of a report that found this technology could have implications for how kids learn to communicate, teaching them that speaking harshly or impolitely to people has no consequences.


How digital technology is changing the world

Hitachi is working with major manufacturers on their digitisation journey, moving away from the conventional customer/supplier relationship and focusing on digital innovation through co-creation. This approach is already delivering results. Swedish ferry operator Stena Line, for instance, wanted to optimise many aspects of its operations, to reduce costs and inefficiencies such as excess fuel consumption. Hitachi gathered data from the ships’ operations and functions and used it to develop an AI algorithm that calculated an optimal way of steering Stena’s vessels and reducing fuel consumption. Mr Ramachander says: “We couldn’t have done that in isolation, without the shipping company. Co-creation is about working with our clients to solve their problems. In a move away from the traditional customer/supplier relationship, we are aiming to become their digital innovation business partner.”


Managing to the Next Century - The 5 Big Things For Agile Transitions


In the new agile world, it is no longer possible to tell people to do a particular task or to plan at the same level of breadth or depth. Work is defined, managed and executed by empowered teams who are focused not on the task, but on the outcome they are trying to achieve. Quality, including technical debt, is treated in the same way that value is treated, allowing the team and the business to make explicit, transparent decisions on trade-offs. But moving away from traditionally managed work to a more agile approach requires more than managers stepping away ... At the very heart of the agile organization is a collection of teams, self-organized and empowered to make decisions. They have all the right skills to deliver value and are supported by an organization that fills in any gaps and helps them to get better. At scale that means teams of teams and the adoption of practices to ensure that dependencies are effectively managed.


Banking on artificial intelligence

Automation and handling masses of data is very valuable indeed but front-line services are also receiving attention and it is here, when married with human intervention, that excitement lies around the use of AI. The concept lies in being able to enhance the service provided to customers via virtual assistants, chatbots, robo-advisors and other analytical tools, all of which can be made more effective when machine learning and AI are applied. Providing better customer service is a good use for AI and something that all banks are focused on. Indeed, banks are commonly using chatbots and voicebots to interact with customers and solve basic problems without the need for human backup. Avika says: “Banks are using machine learning to improve customer engagement in order to increase customer satisfaction. For example, applying machine learning to unstructured complaints data can help a bank to group the complaints into categories, allowing them to tackle the areas that will have the biggest customer impact first. ...”
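The complaint-grouping use case Avika describes can be sketched with a simple keyword classifier. A real bank would use a trained ML model on unstructured text; the categories and keywords below are invented, and the sketch shows only the grouping-and-prioritizing idea.

```python
from collections import Counter

CATEGORIES = {  # hypothetical complaint categories and keywords
    "fees": {"fee", "charge", "overdraft"},
    "access": {"login", "password", "locked"},
    "cards": {"card", "declined", "fraud"},
}

def categorize(complaint):
    """Assign a complaint to the first category whose keywords match."""
    words = set(complaint.lower().split())
    for category, keywords in CATEGORIES.items():
        if words & keywords:
            return category
    return "other"

complaints = [
    "Unexpected overdraft fee on my account",
    "Card declined at the supermarket",
    "Locked out of online login again",
    "Another monthly fee I never agreed to",
]
counts = Counter(categorize(c) for c in complaints)
# Tackle the largest category first for the biggest customer impact.
print(counts.most_common())
```

Once complaints are bucketed, ranking the buckets by size tells the bank which problem area to fix first, which is the prioritization point made in the quote.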



Quote for the day:


"You may be disappointed if you fail, but you are doomed if you don't try." -- Beverly Sills


Daily Tech Digest - October 12, 2018


The first step in reducing TCO is understanding what it is and why current solutions are driving it so high. A data protection TCO analysis should do what its name implies: calculate the TOTAL cost of ownership. For data protection, this means adding up all the hard costs like data protection storage, the data protection network and data protection software. It should also include periodic costs like hardware and software maintenance (including support) as well as subscription costs like cloud storage or cloud compute. Calculating data protection infrastructure TCO also means adding up the operating costs associated with learning and operating the data protection system. Most data protection solutions are not self-service or designed for IT generalists; they need a well-trained administrator familiar with the infrastructure to interact with it. Operating costs are particularly important because certain complicated data protection tasks – like a full restore – will require a knowledgeable person to complete.
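The cost roll-up described above is easy to model. A sketch with hypothetical line items and a three-year planning horizon (all figures invented):

```python
def data_protection_tco(hard_costs, periodic_costs, operating_costs, years=3):
    """Total cost of ownership over a planning horizon:
    one-time hard costs plus recurring costs for each year."""
    recurring = sum(periodic_costs.values()) + sum(operating_costs.values())
    return sum(hard_costs.values()) + years * recurring

tco = data_protection_tco(
    hard_costs={"backup storage": 120_000, "network": 30_000,
                "software licenses": 80_000},
    periodic_costs={"hw/sw maintenance": 25_000, "cloud storage": 18_000},
    operating_costs={"trained administrator": 95_000, "training": 5_000},
)
print(f"3-year TCO: ${tco:,}")
```

Laying the categories out this way makes the article's point visible: the recurring operating costs quickly dwarf the one-time hardware and software spend.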



Taking Agile Transformations Beyond the Tipping Point

Not all leaders can make this transition. For example, one Asia-Pacific company undergoing an agile transformation replaced one-quarter of its top 40 leaders with individuals who better embodied agile values, such as collaboration and teamwork. Middle managers will also face challenges. Those who have grown up inside silos will need to learn how to manage cross-functional teams and delegate decision making to employees closer to the field. They may even need to return to doing the daily work rather than only managing other people. The coordination activities that consumed so much of managers’ time are increasingly handled within and between teams. While agile may be a fundamentally different way of working, many of the steps to become an agile organization are familiar to any executive who has gone through a successful corporate transformation. (See Exhibit 2.) The steps of committing, designing, preparing, and refining are variations of any large-scale change.


Detail of Dutch reaction to Russian cyber attack made public deliberately


The attackers used a rental car parked close to the OPCW building in The Hague. The hackers then attempted to use Pineapples to break into the WiFi network of the organisation. Pineapples are devices usually used for intercepting network traffic. The hackers were also caught using antennas and signal amplifiers, and other equipment the MIVD considers “specifically used during hacking operations”. During the operation, the MIVD found laptops with extra batteries (which the MIVD said were purchased in the Netherlands), and mobile phones with 4G connectivity, which the hackers tried to destroy during their arrest. Eichelsheim reiterated that the excuse that the Russians might’ve simply been on holiday won’t fly. “They were caught with very specific equipment, entered on diplomatic visas, and were found carrying €20,000 and $20,000 in cash. That’s not a holiday.”


A Day In The Life Of Ms. Smith: How IoT And IIoT Enhance Our Lives

Ms. Smith walks out of the building. An RFID reader at the door scans her badge as she walks past it. Computer vision sees her approaching the exit and walking into the parking lot. The drive home is much like her drive to work. Computer vision devices on the road monitor and control traffic signals. Her ride home is slow—but again, she misses most of the red lights. Fifteen minutes before she gets home, the thermostat automatically turns on the heat (or cooling) so that the temperature is comfortable when she comes in the door. Finally at home, she walks inside, and the lights turn on. To relax, she turns on the TV, and the lights in the room automatically dim, making it easier for her to watch her favorite show. As she’s ready for bed, she says, “Turn down the lights,” to her digital assistant. “Oh, and wake me up at 5:30,” she says. “No, make it 6.” Lights in the other parts of her house dim, the lights in her bedroom slowly fade, and so does Ms. Smith.


5 CRM trends for 2018

Applying machine learning to CRM data has been a difficult process for most organizations. To do this traditionally you would need machine learning expertise on staff, developers and the drive to build the solution. Alternatively, you would have to build and maintain integration between your CRM system and an external machine learning service. That’s starting to change. “Machine learning is now built directly into CRM products,” explains Julian Poulter, research director for CRM and CX (customer experience) at Gartner. “We have seen about 30 use cases applying machine learning to CRM, but industry adoption is slow so far. The use cases include recommending alternative products, lead scoring and ecommerce recommendations.” That means the kinds of product recommendation features offered by Amazon and other ecommerce providers are within reach of many more organizations. But that’s not the only way machine learning can help.


Spinnaker is the Kubernetes of Continuous Delivery

Despite its humble and slow start, Spinnaker is enjoying widespread adoption. Today, Spinnaker is backed by industry leaders like Microsoft, Google, Netflix, Oracle and so on. It’s supported by all major cloud providers, including but not limited to, AWS, Google Cloud Platform, Microsoft Azure and OpenStack. Spinnaker users include big names like Capital One, Adobe, Schibsted, Lookout and more. There is a growing vendor ecosystem around it which includes players like Mirantis, Armory and OpsMx. ... There were roughly 400 people at the event, representing over 125 companies and over 16 countries. During the Summit, the community announced the governance structure for the project. “Initially, there will be a steering committee and a technical oversight committee. At the moment Google and Netflix are steering the governance body, but we would like to see more diversity,” said Steven Kim, Google’s Software Engineering Manager who leads the Google team that works on Spinnaker.


Anomaly detection methods unleash microservices performance

A symptom-manifestation-cause approach involves working back from external signs of poor performance to internal manifestations of a problem to then investigate likely root causes. For example, the symptom of increased response times can be tracked to the internal manifestation of excess latency in message passing between the app's services, which occurred because of a failing network switch. Other potential root causes exist for those same symptoms and manifestation, however. For example, an application design using overly large message requests, or too many small messages, would cause the same issue. These root causes would be found by different tools and resolved by different people. Change-impact analysis creates broad categories that lump together changes in component-level metrics based on their effect on external performance measures. These metric categories might include network link latency, database queue depth and CPU utilization, grouped according to assessments such as excessive resource usage, cost overages or response time.
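The change-impact grouping described above might be sketched as rules that lump component-level metric changes into broad assessment categories. The metric names and thresholds below are invented for illustration only:

```python
def assess(metric, value, baseline):
    """Lump a component-level metric change into a broad category
    based on its likely effect on external performance measures."""
    change = (value - baseline) / baseline  # relative change vs. baseline
    if metric in {"network_latency_ms", "db_queue_depth"} and change > 0.5:
        return "response-time risk"
    if metric == "cpu_utilization" and change > 0.3:
        return "excessive resource usage"
    if metric == "cloud_spend" and change > 0.2:
        return "cost overage"
    return "normal"

print(assess("network_latency_ms", 90, 50))  # 80% above baseline
print(assess("cpu_utilization", 60, 50))     # 20% above baseline
```

The value of the grouping is triage: instead of alerting on every raw metric, operators see which external assessment category a change falls into and can route it to the right tool and team.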


Unlock distributed analytics with a microservices approach


Combining BI and analytics software with a microservices approach enables average end users to drill down into data with specific types of queries. When it comes time to visualize that data, organizations must decide whether to build customized visualization tools in-house or adopt a third-party option. A vast number of options exist for visualization, which include web-based platforms and stand-alone, open source tools. These tools tend to focus on a range of data interaction, from complex depictions of near-time data to simple renderings. However, big data sources have their limitations. Streaming and unstructured data sources present challenges that mainstream analytical tools struggle to depict. For example, some query connections won't accept data set blending, which limits exploratory analysis. Teams may also encounter system timeouts, out-of-memory exceptions, long query waits and rendering limitations. Even so, distributed analytics approaches can excel with big data.


Digital transformation in 2019: Lessons learned the hard way

Because of the focus on the technology components, the people side of the changes required for digital transformation often goes under-addressed, yet it is arguably the key success factor. That's because the people in the organization have to carry out the digital transformation, yet are often inadequately equipped to do so from a skill, culture, mindset, inclination, and talent perspective. Many organizations have had their digital change initiatives crash upon the shoals of insufficient human capability to carry them out or an inadequately enabling environment. Currently, lack of appropriately skilled personnel ranks in the top five obstacles to digital transformation and is reported by 39 percent of organizations. The good news is that improved organizational focus and improved techniques for upskilling workers to support digital transformation have been arriving. Expect to see more of both in 2019. The smart digital leader will use the resources of HR's L&D department to help drive them.


Multicloud does not eliminate vendor lockin

You might think you can avoid the trade-off by using containers or otherwise writing applications so they are portable. But there is a trade-off there as well. Containers are great, and they do provide cloud-to-cloud portability, but you’ll have to modify most applications to take full advantage of containers. That could be an even bigger cost than going cloud-native. Is it worth the avoided lockin? That’s a question you’ll need to answer for each case. Moreover, writing applications so they are portable typically leads to the least-common-denominator approach to be able to work with all platforms. And that means that they will not work well everywhere, because they are not cloud-native. I suppose you could write portable applications that are cloud-native to multiple clouds, but then you’re really writing the application multiple times in advance and just using one instance at a time. That’s really complex and expensive. Lockin is unavoidable. But lockin is a choice we all must make in several areas: language, tooling, architecture, and, yes, platform.



Quote for the day:


"Leadership cannot just go along to get along. Leadership must meet the moral challenge of the day." -- Jesse Jackson


Daily Tech Digest - October 11, 2018

No company seems to be safe anymore. In 2018 alone, we have seen the social media giant Facebook reporting data breaches twice, affecting millions of users each time. As if this wasn’t enough, a couple of days ago Google reported exposing the data of more than 500K users of its social network Google+ between 2015 and March 2018. The ironic part is that Google reported no misuse of data but in response to this incident has decided to completely shut down the portal. Huh? Apparently, Google didn’t disclose this earlier citing fear of regulatory scrutiny. Wondering if we should still trust these tech giants with our personal data? European data regulation like the GDPR is a step in the right direction in protecting customers’ data, and these tech companies are now facing multi-billion-dollar lawsuits. On a side note, the crypto industry is facing a similar situation, with more than $927 million worth of digital money stolen to date this year — 3.5 times more than in 2017.


The benefits of IAM processes, strategies for digitized companies


"Companies are using more and more systems than they ever have before. They're collecting more data, [and] the employees' job roles are changing faster," he said, adding that identity access management sits at the nexus of all those dynamics. Consider how an employee may require access to specific data or certain applications to work on a project, but will not need that access on an ongoing basis, he said. IT should be capable of changing access rights of not just that employee, but dozens, hundreds or even thousands of employees, as needed. However, not all organizations are maturing their IAM practices, Maxim said. "There are still a lot of companies that are doing very little with IAM -- they're working on spreadsheets, or they've reached a limit to what they could do with their homegrown systems," he said. However, he noted that many of them are "actively looking to find ways to streamline what they're doing."
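The scenario described here, granting project-scoped access and later changing it for hundreds or thousands of employees at once, is the core mechanic of IAM. A minimal sketch (hypothetical API, not any particular IAM product):

```python
class AccessManager:
    """Toy IAM: grant project-scoped access to a resource and
    revoke it in bulk when the project ends."""
    def __init__(self):
        self.grants = set()  # (employee, resource, project) tuples

    def grant(self, employees, resource, project):
        for emp in employees:
            self.grants.add((emp, resource, project))

    def revoke_project(self, project):
        # One call revokes access for every employee on the project.
        self.grants = {g for g in self.grants if g[2] != project}

    def can_access(self, employee, resource):
        return any(e == employee and r == resource
                   for e, r, _ in self.grants)

iam = AccessManager()
team = [f"emp{i}" for i in range(500)]
iam.grant(team, "sales_db", project="q4-forecast")
print(iam.can_access("emp42", "sales_db"))   # access for the project
iam.revoke_project("q4-forecast")            # project ends
print(iam.can_access("emp42", "sales_db"))   # access goes with it
```

Tying each grant to a project rather than a person is what makes the bulk revocation possible, which is exactly the capability spreadsheets and homegrown systems struggle to provide.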


Successful data-driven companies must balance human and machine roles

The anticipated redistribution of work between humans and machines may displace 75 million jobs, but it’s likely to create as many as 133 million new ones, too, according to the report. This major shift in jobs may not reassure those of you who believe technology is a threat to your role. But the reality is that smarter technologies provide an amazing opportunity to focus on the ways that we create the most value for our organizations. Creativity and strategic thinking remain distinctly human advantages. When paired with the increased processing capacity of machines, there is plenty of room to be optimistic about the future. Understanding the strengths of people vs. machines We are far from a reality where we can trust machines to make business decisions with human-like judgment and contextual understanding. Today, we trust machines to automate tasks and analysis in areas that are heavily parameterized and minimally risky. 


Disaster Recovery: Data Center or Host Infrastructure Reroute


Regardless of which approach you take, even if everything works flawlessly, you still need to address the ‘brownout’ phenomenon, or the time it takes for services to be restored at the primary or to a secondary location. It is even more important to automatically send people to a different location if performance is impaired. Several people have heard of GSLB, and while many use it today, it is not part of their comprehensive DoS approach. But it should be. If your goal with your DDoS mitigation solution is to ensure an uninterrupted service in addition to meeting your approved performance SLA, then dynamic GSLB, or infrastructure-based performance load balancing, has to be an integral part of any design. We can deploy this technology purely defensively, as we have traditionally done with all DoS investments, or we can change the paradigm and deploy the technology to help us exceed expectations. This allows us to give each individual user the best experience possible.
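Dynamic GSLB as described boils down to steering users to the best-performing healthy site, treating SLA breaches as impairment rather than waiting for a hard failure. A simplified decision sketch (site data and SLA figure are invented):

```python
def pick_site(sites, sla_ms=200):
    """Return the healthy site with the best response time.
    Sites breaching the SLA are treated as impaired ('brownout')."""
    candidates = [s for s in sites
                  if s["healthy"] and s["response_ms"] <= sla_ms]
    if not candidates:
        # Every site breaches the SLA: fall back to any healthy site.
        candidates = [s for s in sites if s["healthy"]]
    return min(candidates, key=lambda s: s["response_ms"])["name"]

sites = [
    {"name": "primary-dc", "healthy": True, "response_ms": 450},   # brownout
    {"name": "secondary-dc", "healthy": True, "response_ms": 80},
]
# Users are steered away from the impaired primary automatically.
print(pick_site(sites))
```

The key design point matches the article: routing on measured performance against the SLA, not just on up/down health checks, is what turns GSLB from a disaster-recovery tool into an experience-improving one.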


Suspected NASA Hacker Busted After Boasting About Exploits

The suspect was identified after a year-long investigation by the Polizia Postale - Italy's postal police - via its CNAIPIC group, which since 2008 has served as the national anti-crime computer center for the protection of critical infrastructure. It regularly investigates cybercrime. After identifying the suspect, police say they executed a search that resulted in the seizure of computing devices, which have tied the suspect to attacks against at least 60 Italian websites. In addition, rather than just being a member of the "Master Italian Hackers," the suspect appears to have been one of its leaders, authorities say. The Italian suspect is the latest in a long list of admitted hackers whose "too much information sharing" habits got them in trouble. To pick just one example: Last year, Russian-born Alexander Konstantinovich Tverdokhlebov, who emigrated to the U.S. in 2007, later becoming a naturalized citizen, pleaded guilty in U.S. federal court to having been "an active member of several highly exclusive Russian-speaking cybercrime forums."


SoftBank has a lot to worry about if it strikes this deal with WeWork


It’s very possible that the talks for SoftBank Vision Fund to invest up to $20 billion into WeWork will fail, Recode was told. Here are some of the hazards that could trip up either side over the next few weeks. The Vision Fund’s single biggest outside investor, the Saudi government, which holds a 45 percent stake, is under increasing political scrutiny after allegations it is behind the disappearance of U.S.-based Washington Post journalist Jamal Khashoggi. Backing from foreign governments has always loomed as a major liability for venture capital investors. The SoftBank-Saudi ties are not new. But the Khashoggi revelations make it particularly bad timing for a deal, as WeWork could face reputational risk for taking money from a government that’s embroiled in such a high-profile human rights case. “If all that’s alleged is true, WeWork will be in bed with a regime that has expressed brazen disregard for virtually any norm of international politics,” said Chris Meserole, a foreign policy fellow at The Brookings Institution.


3 things you should do to prevent cyber attacks

The threat landscape is constantly evolving, with cyber criminals always looking for new exploits and studying one another’s tactics. As soon as a particular exploit proves successful, crooks the world over will adopt and refine it.  The majority of successful attacks come in the immediate aftermath of the popularisation of a particular attack method. That’s because its success is predicated on the fact that many organisations are vulnerable to it. Once the trend becomes common knowledge, organisations learn how it works and address it.  You can greatly minimise your chances of coming under attack by staying informed about growing trends. There are many ISACs (Information Sharing and Analysis Centres) that you can use to gather real-time threat intelligence.   When it comes to addressing new attack methods, processes and policies are relatively resilient and will perhaps only need to be tweaked. You are much more likely to need to update your software and web applications.


Automate everything or get left behind

Discovery and auto-monitoring. Sophisticated monitoring solutions use an increasing range of methods, including direct access to hosts via SSH and indirect access via configuration repositories like ActiveDirectory and services like Windows Discovery, to extract facts from existing infrastructure and speed up monitoring configuration by operators. Leading-edge products are now moving towards automating the process completely: creating comprehensive maps of infrastructure, apps, and complete business services and monitoring these things without the need for any manual intervention or direction. Alert processing, notification, escalation, integration. Alerting is, of course, a powerful form of automation. It entails decision-making, which may be simple or significantly more complex (e.g., several metrics, from separate systems, have entered states predictive of a particular kind of known failure for a critical business service). It involves sophisticated assignment and escalation based on issue, team rotas, time/date and other variables.
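The more complex alerting decision described above, several metrics from separate systems jointly predicting a known failure, can be sketched as a composite rule. The metric names, thresholds and service are invented for illustration:

```python
def predict_failure(metrics):
    """Fire a single high-priority alert only when the combination
    of signals matches a known failure pattern for a critical service."""
    return (metrics.get("db_replication_lag_s", 0) > 30
            and metrics.get("app_error_rate", 0) > 0.05
            and metrics.get("disk_io_wait_pct", 0) > 40)

snapshot = {  # readings gathered from three separate systems
    "db_replication_lag_s": 55,
    "app_error_rate": 0.09,
    "disk_io_wait_pct": 62,
}
if predict_failure(snapshot):
    print("escalate: signals match an impending checkout-service failure")
```

Requiring all three signals together is what distinguishes this from simple per-metric thresholding: any one reading alone might be noise, but the combination predicts a specific known failure and can be escalated to the right team automatically.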


SD-WAN Adolescence Is About Interoperability and Scalability

Almost everyone (enterprises, CSPs, standards bodies, vendors) I spoke with acknowledges that data-plane interoperability in SD-WAN is unlikely in the near future. Enterprises and CSPs are telling me they don’t need it yet. This brings back memories of the old IPsec interoperability wars. Trying to create interoperable meshes of nodes from disparate vendors today is really putting the cart before the horse because we need to start from the control plane. CSPs that are in the process of building or customizing their orchestration systems to integrate with SD-WAN offerings say that having interoperability at the control and management level allows for coordination between multiple SD-WAN domains. It also makes switching vendors feasible with much less pain. The present efforts focus on interoperability at the northbound API level. They govern the APIs used to provision and control SD-WAN deployments.


Security warning: Attackers are using these five hacking tools to target you

Perhaps the most potentially damaging of the dangers detailed in the report are remote access trojans - malware which is secretly installed onto an infected system, providing a backdoor to observe all activity and enabling the attacker to carry out commands that lead to data being stolen. The particular example given in the report is JBiFrost, a trojan typically employed by low-skilled cyber criminals but with the capability to be exploited by state actors. What makes JBiFrost so potent is that it is cross-platform, with the ability to operate on Windows, Linux, Mac OS X and Android. Often delivered via a phishing email, it allows attackers to move across networks and install additional software. This particular RAT is publicly available, and the cyber security agencies said they have observed it being used in targeted attacks against critical national infrastructure owners and their supply chain operators.



Quote for the day:


"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox


Daily Tech Digest - October 10, 2018

Underlying all this likely nonsense is the obvious fact that almost every computer chip in the world is made outside of the U.S., often in Asian locations. I used to laugh when I was told that I couldn’t bring my Lenovo laptop in, but I could bring in my Dell laptop, which itself was full of nothing but Asian-made chips. If you are worried about supply chain threats, and you should be, it’s not just one little purported spy chip you should be worried about. You can’t find a computerized device in the U.S. that doesn’t have foreign-made chips. There isn’t some secret U.S. government agency that goes around inspecting all those chips for security holes or backdoors before they get put into all our computers. To me it is a hilarious idea that the Chinese would have to insert a specialized, tiny spy chip when it would be far easier to put an intentional weakness or backdoor into any of the hundreds of chips that are used in every computer on the planet. It would be far easier to hide in the weeds than to create a dedicated spy chip that any hardware expert would notice and question.



Overcoming the top obstacles to digital transformation success

You should begin developing a solid digital transformation strategy by first establishing a small, integrated governance team with equal representation and influence from the business and IT, including security. The governance team will enable a clear line of communication between digital and legacy IT teams and ensure initiatives are synchronized so appropriate investments are made to harden core systems while securely exposing functionality that enables digital initiatives. While security was not cited as one of the top three barriers, it remains a concern. With the highly fragmented state of data across most enterprises today, exposing data sources to new digital systems creates yet another opportunity for attack. IT and security are integral to governance to limit risk exposure as new digital capabilities are introduced. As you launch digital initiatives, especially if you are behind the digital curve, partner with digital leaders who can provide the capabilities you need to get your products to market securely while you continue learning and developing internally.


IT departments struggle to balance innovation with everyday IT operations


“Organisations have become acutely aware of the critical role technology now plays in overall business strategy, from enabling a more productive and connected workforce to increasing market share and customer loyalty,” she said. “The Insight Intelligent Technology Index signifies how competing demands on IT are inhibiting their ability to plan and innovate.” The index, which queried 200 IT professionals, also found that 79% of IT decision makers felt there were not enough resources to effectively support the demand for innovation, with 33% saying innovation was expected of them despite existing processes, practices and business operations not evolving in ways that allowed them to do so. Another 30% cited a lack of clearly defined roles and responsibilities in the organisation as a reason for the lack of innovation.


CEO Fraud: Barriers to Entry Falling, Security Firm Warns

To hide their efforts, attackers may alter the rules for a compromised email account to divert copies of their fraudulent messages - and potentially replies - to other, attacker-controlled accounts, Digital Shadows notes. Such fraud can take the form of creating false invoices or modifying legitimate ones, swapping in details for accounts controlled by attackers. Because BEC scams typically exploit weak corporate controls, organizations can take many actions to better defend themselves, Digital Shadows says. One of the most basic steps is to ensure that email accounts always have two-step verification enabled; that at least prevents an attacker who has the login credentials from accessing the account. Controls around wire transfers can also be shored up, Digital Shadows says. Fraudsters have had success, for example, by compromising the email account of a CEO and then emailing the finance department to say a payment needs to be made.


Discovering Blind Spots in the Data


Usually, there’s a trade-off between precision and recall: improving precision can drop the recall, and vice versa. It’s up to the business stakeholders to tell the data scientists which is more important: identifying more actual escalations at the cost of having more false escalations classified as escalations (high recall, low precision), or minimizing false escalations at the cost of missing many actual escalations (low recall, high precision)? If the business stakeholders go for high recall and low precision, they will need to engage more people to deal with a higher number of real escalations and possibly many false escalations. If they choose low recall and high precision, they can engage fewer people to deal with the escalations but will risk having the model miss many real escalations. In our case, the business stakeholders initially preferred high precision with low recall, so that they didn’t have to deal with a lot of false escalation alerts. Our dataset had a few features whose values changed with time. This introduced us to a phenomenon called signal leakage.
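The trade-off can be made concrete with the standard definitions, precision = TP/(TP+FP) and recall = TP/(TP+FN). A minimal sketch in Python; the confusion counts below are hypothetical, chosen only to illustrate the two operating points described above:

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from confusion counts."""
    precision = tp / (tp + fp)  # of everything flagged, how much was real
    recall = tp / (tp + fn)     # of everything real, how much was flagged
    return precision, recall

# Aggressive model: catches most real escalations, raises many false alarms.
p, r = precision_recall(tp=90, fp=60, fn=10)
print(f"high recall / low precision:  precision={p:.2f} recall={r:.2f}")

# Conservative model: few false alarms, but misses many real escalations.
p, r = precision_recall(tp=40, fp=5, fn=60)
print(f"high precision / low recall: precision={p:.2f} recall={r:.2f}")
```

The staffing argument in the text falls directly out of these numbers: the aggressive model surfaces 150 alerts (90 real, 60 false) to be triaged, while the conservative one surfaces only 45 but silently drops 60 real escalations.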


Why 60% of IT security pros want to quit their jobs right now

The main reasons cited by the IT pros who wanted to leave were job dissatisfaction and the lack of growth opportunities within their companies, said the release. The survey gathered data from more than 9,000 IT security professionals and decision-makers in the enterprise. This survey could give businesses better insight into how to retain and support their current tech talent. Other top reasons for employees looking to quit include unhealthy work environments (53%), absence of IT security prioritization from executives or upper management (46%), unclear job expectations (37%), and lack of mentorship (30%), said the release. Buy-in from upper management is crucial for security efforts, since only 38% of CEOs are deeply engaged in cybersecurity. This low engagement suggests that executives don't prioritize cybersecurity as highly as other aspects of the business, which further validates the dissatisfaction IT professionals are feeling.


NASA is using HoloLens AR headsets to build its new spacecraft faster


In the headset, the workers can see holograms displaying models that are created through engineering design software from Scope AR. Models of parts and labels are overlaid on already assembled pieces of spacecraft. Information like torquing instructions—how to twist things—can be displayed right on top of the holes to which they are relevant, and workers can see what the finished product will look like. The virtual models around the workers are even color-coded to the role of the person using the headset. For Jory’s team, which is currently constructing the heat shield skeleton of Orion, the new technology takes the place of a 1,500-page binder full of written work instructions. Lockheed is expanding its use of augmented reality after seeing some dramatic effects during testing. Technicians needed far less time to get familiar with and prepare for a new task or to understand and perform processes like drilling holes and twisting fasteners. These results are prompting the organization to expand its ambitions for the headsets: one day it hopes to use them in space.


Why today's containers and microservices will be tomorrow's legacy sooner than you think

The industry will be stuck with container platforms because these are interesting technologies that give the operators a taste of the power of running massive jobs at scale. Unfortunately, the ROI of maintaining that platform is elusive, since very few companies running these platforms will ever reach a point where they can even optimize job scheduling, and the cost of maintaining the container platform itself competes with the modest improvements in the developer's user experience. A similar phenomenon was seen with OpenStack half a decade ago, when, in the rush to have an in-house cloud, many companies grossly underestimated the short- and long-term associated costs and are now stuck maintaining OpenStack in perpetuity for the sake of the unnamed applications running on top of it. Well, that's a depressing thought, isn't it? But true. And why? Well, because technology change is hard. 


Heathrow Airport fined £120,000 over lost USB stick

“Given the way the data was captured and displayed, it would not be readily available or searchable, but [the information commissioner] considers that a motivated individual could locate and extract the data in a more permanent way,” the notice said. Although the USB stick contained more than 1,000 files overall, just 1% of the information could be classified as personal in nature. A subsequent investigation by the ICO also revealed that less than 2% of the airport’s 6,500-strong workforce had received data protection training. “Given that Heathrow Airport is Europe’s busiest airport, where high-level security should be inherent, loss or unauthorised disclosure of personal data of staff could have presented a greater risk if found by individuals who had not handled the data responsibly,” the penalty notice said. “Taking into account all of the above, the commissioner has decided that the penalty is £120,000.” According to the report, the USB stick was found in Kilburn, west London, on 16 October 2017, before being handed in to a national newspaper 10 days later.


Behavioral Biometrics: Key Challenges

As more companies move away from passwords toward behavioral biometrics, they face new challenges, says Rajiv Dholakia, vice president of products at Nok Nok Labs, a company focused on next-generation authentication. Behavioral biometrics relies on behavioral traits of an individual rather than physical traits; examples include speech patterns, signatures and keystrokes. "There are no standards as such in this area on how the information is collected, how it's stored and how it's processed," Dholakia says in an interview with Information Security Media Group. "And therefore, there may be some privacy hazards associated with the technique unless a manufacturer makes it super clear exactly what is being collected, how it's being processed and whether that profile data is anonymized." Other behavioral biometrics issues include accuracy and concerns about passive collection of information from users, he says. "Moreover, when you are using behavioral biometrics, you have to be super certain that the information coming from all sensors is coming from a real device as opposed to a virtual machine," he says.
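To make the keystroke example concrete, here is a minimal sketch of two timing features commonly used in keystroke dynamics: dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next). The event timestamps below are hypothetical, not from any real capture:

```python
# Hypothetical key events, in milliseconds: (key, press_time, release_time).
events = [
    ("p", 0,   95),
    ("a", 140, 230),
    ("s", 260, 350),
    ("s", 400, 480),
]

# Dwell time: how long each key is held down.
dwell = [release - press for _, press, release in events]

# Flight time: gap between releasing one key and pressing the next.
flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

# A simple timing profile that can be compared across sessions to
# decide whether the same person is likely at the keyboard.
profile = {
    "mean_dwell_ms": sum(dwell) / len(dwell),
    "mean_flight_ms": sum(flight) / len(flight),
}
print(profile)
```

Dholakia's privacy point applies exactly here: even a profile this simple is derived from continuously collected input data, so how it is stored, processed and anonymized matters.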



Quote for the day:


"He who cannot be a good follower cannot be a good leader." -- Aristotle


Daily Tech Digest - October 09, 2018

NoOps Future of Cloud Networking
Recent technology advancements in cloud networking point the way toward a NoOps approach to cloud networking, meaning the automation of processes that now depend on direct control by human networking experts. One advancement is that the as-a-service revolution is finally reaching cloud networking infrastructure. What began with virtualized networking hardware has more recently progressed to software-defined (SD) cloud routing, enabling distributed, heterogeneous networking infrastructure to traverse public cloud, on-premises, hybrid cloud and multicloud environments. SD cloud routing centralizes and automates networking functions that previously required hands-on, time-consuming attention by highly certified human experts. As a result, SD cloud routing shifts networking infrastructure control directly to CloudOps and DevOps engineers, who are no longer dependent on networking professionals to establish or maintain their cloud networking infrastructure.


8 key elements of an effective disaster recovery plan

As the Southeast U.S. continues to recover in the aftermath of Hurricane Florence, it’s important for IT leaders to consider the effect hurricanes and other natural disasters have on healthcare information needs - both now and before the next challenge strikes. The sobering fact is that more than half of organizations (58 percent) are not ready for a major loss of data; in fact, 60 percent of companies that suffer a major data loss will go bankrupt within six months, according to data from Washington, D.C.-based research firm Clutch. ... In order to prepare for a disaster, organizations need a strong DR plan and must be willing to go above and beyond it in implementation. Not only should a company build its core processes into a DR plan and have a team designated for DR tasks, but it should also perform a risk assessment to determine what challenges might arise and how dangerous each of those elements is. One way to do this is through security penetration testing, in which an organization tests its system’s security by trying to exploit its weaknesses. Since disaster recovery is extremely important in healthcare compliance and other regulated industries, it is best to also incorporate compliance into security and DR planning.


Cooperation vital in cyber security, says former Estonian minister


Looking to the future, Kaljurand said states should come together and continue the UN GGE process. “But they have to change the process. If they want the process to be serious and respected, if they want it to be adopted by a wider number of states, [the process] has to be open, transparent and inclusive.” The challenge for states and governments is to find ways of cooperating so that those who want to contribute will have the chance to be part of the process, said Kaljurand. “The UN has to lead by example and say that ‘multi-stakeholder’ means all stakeholders: governments, businesses, industry, civil society, academia and the technical community,” she said. Within the context of the UN, Kaljurand said Western democracies should be much more active in promoting their understanding of the use of information and communication technologies and how technology can change countries in terms of economy, governance, people, education and awareness.


Scaling your developer community with plugins

29% of Foreman survey respondents want help to contribute
Scale the user community, and the developer community should grow too. But how? Attracting users isn't easy either—there's a reason we have marketing. Unless you have a big budget for events, materials, adverts, etc., scaling the user community isn't much easier than scaling the developer one. So we have limited options for attracting users, and virtually none for the developers. What are we left with? Well, for ideas we could look to the ways users become developers. You'll always have a few people that were destined to become contributors—the right mix of domain knowledge, interest/drive, and programming skill. But there's a larger group who have an itch to scratch but perhaps aren't so confident in diving into the full code base. These are the people we need to target. As evidence, here's some data from the most recent Foreman Community Survey (the community I work in) showing nearly one-third of the 160 respondents would like to contribute but don't know where to start.


Wi-Fi 6 is coming to a router near you

The basic technology behind Wi-Fi 6, which is still known as 802.11ax on the technical side, promises major advances beyond just higher data rates, including better performance in dense radio environments and higher power efficiency. Wi-Fi 6 is also seen as a possible communications method for internet-of-things (IoT) devices that have low power capabilities and limited battery life. Thanks to a feature called target wake time, Wi-Fi 6 IoT devices can shut down their Wi-Fi connections most of the time and connect only briefly as scheduled in order to transmit data they’ve gathered since the last time, thus extending battery life. Farpoint Group principal and Network World contributor Craig Mathias said that, given the degree to which consumerization is the driving force even behind enterprise IT these days, the re-naming is probably a step in the right direction, but that doesn’t mean that simply labeling 802.11ax as Wi-Fi 6 tells the whole story.
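The battery argument behind target wake time comes down to duty-cycle arithmetic: average current draw collapses when the radio sleeps between scheduled wake windows. A back-of-the-envelope sketch; the battery capacity, current draws and duty cycle below are hypothetical illustrative numbers, not figures from the article or the 802.11ax specification:

```python
# Hypothetical IoT sensor on a small battery.
battery_mah = 1000      # battery capacity
active_ma = 100.0       # current draw with the radio awake/transmitting
sleep_ma = 0.05         # current draw with the radio asleep between windows

def battery_hours(duty_cycle):
    """Battery life given the fraction of time the radio is awake."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return battery_mah / avg_ma

always_on = battery_hours(1.0)    # radio never sleeps: ~10 hours
with_twt = battery_hours(0.001)   # awake ~0.1% of the time via TWT
print(f"always-on: {always_on:.0f} h, with TWT: {with_twt / 24:.0f} days")
```

The point of the sketch is the shape of the curve, not the exact numbers: once the radio sleeps almost all the time, battery life is dominated by the sleep current, which is why TWT matters for low-power IoT devices.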


Mapping the Market for Agile Coaches


By no means did we arrive at definitive answers but what we learned is an important contribution to the field. In fact, we believe that this article contains more information on agile coach compensation than has ever been available in one place. In gathering this information, we focused on using only the highest quality, verifiable data. As such the data below comes from only three sources: Agile coaching positions that the five agile coaches who participated in the meeting have had, mostly over the last two years; Positions that the authors have been offered, mostly over the last year; and Information that close, trusted friends of the authors have provided about their coaching positions. ... When extrapolating this information to other situations we caution: The authors are mid-level agile coaches and above. As such, entry level positions are likely to be under-represented; and The authors are located in the greater San Francisco Bay area, one of the most expensive areas in the world. Although not all of the positions are based in this geography, most of them are.


Below the Surface, Microsoft is not the new Apple

When Surface was announced, there was much debate around whether Microsoft could break the pattern of doom when OS licensors compete with their licensees. The company has largely avoided conflict by focusing its device portfolio to compete most directly with Apple's and by embracing Surface Pro-like products from other PC manufacturers. Surface's success has likely emboldened Google and Amazon to produce their own devices while continuing to seek broad licensing. One could even argue that Surface has provided more incentive for Microsoft to step up efforts such as its own stores and retail areas within Best Buy that also feature licensees' PCs. But think about that quip about the Surface not being a PC. What does that say about the merits of other products that are PCs and bound by the same version of Windows? The contrast between the treatment received by Surface and licensees was not pretty when, at its 2017 fall education event, Microsoft introduced the premium Surface Laptop, while the third-party announcements were focused on low-margin, Alcantara-bereft laptops.


How can IT put Windows 10 containers to use?


To use Windows 10 containers with Docker, IT must enable Microsoft Hyper-V on the endpoints to which it plans to deliver the containers. Microsoft supports two types of Windows containers: Windows Server Containers and Hyper-V Isolation Containers. A Windows Server Container runs directly on the host and shares the host's kernel; only Windows Server can host Windows Server Containers. A Hyper-V Isolation Container runs in a highly optimized virtual machine, making it more secure than a Windows Server Container. Both Windows Server and Windows 10 can host Hyper-V Isolation Containers, which is why IT must enable Hyper-V on Windows 10 machines. Docker containers are well suited to Agile application delivery scenarios, especially for applications based on a microservices architecture, where the services within an app run separately from one another. IT can easily create and deploy Windows 10 containers on developer desktops and testing machines and then implement them in production deployments.
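A minimal sketch of the setup described above, assuming a Windows 10 host with Docker Desktop installed. The commands are illustrative rather than a definitive procedure, and the image tag (`1809` here) must match the host's Windows build:

```shell
# Enable the required Windows features (run in an elevated PowerShell,
# then reboot):
#   Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
#   Enable-WindowsOptionalFeature -Online -FeatureName Containers -All

# Run a Windows container with Hyper-V isolation, i.e. inside a
# lightweight VM rather than sharing the host kernel. On Windows 10,
# Hyper-V isolation is the only supported mode.
docker run --rm --isolation=hyperv \
    mcr.microsoft.com/windows/nanoserver:1809 cmd /c echo hello
```

On Windows Server, dropping `--isolation=hyperv` (or passing `--isolation=process`) runs the same image as a Windows Server Container sharing the host kernel, which is the distinction the paragraph above draws.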


Intel, AMD both claim server speed records

Even more impressive is that all systems tested included mitigations for the Spectre and Meltdown vulnerabilities. To mitigate those flaws in the CPU, some functions have to be disabled either in software or at the firmware level, and that can mean performance hits, sometimes significant ones. The results show Intel has a per-core advantage, because its top Xeons have 28 cores whereas the AMD Epyc has 32. And one benchmark shows Intel’s AVX-512 extensions, which add 512-bit floating-point vector instructions, clobbering Epyc, with impact on compute, storage, and networking functions. So, it’s a big deal. The downside? They might be hard to get. Last week, CFO and interim CEO Robert Swan published a letter saying that the company was suffering from a shortage of chips due to increasing demand, though Swan assured customers the company would be able to meet it. The good news for data center operators is that the problem seems to be more on the PC side than the server side.


Four critical KPIs for securing your IT environment

So, what should you be measuring when it comes to your security program? As the old saying goes: if you can’t measure it, you can’t manage it. Here are four Key Performance Indicators (KPIs) that can help enterprises navigate the murky waters of cybersecurity and reduce anxiety surrounding the possibility of cyber attacks. ... One practice that is key to this KPI is patching, so be sure to document patch cycles. However, some assets, like industrial control systems, stamping presses or systems for other industrial uses, may not be patchable; many times, the manufacturer of the equipment will not support an updated operating system. When patching is not an option, the next best step is to use application whitelisting on the asset, which ensures that it will function as a fixed-purpose device. That being said, patching in and of itself is not a silver bullet: there are still many assets for which neither of these options is feasible. In that case, the only option is isolation; isolating an asset in its own network segment is, in many cases, the only way to enhance security.



Quote for the day:


"One of the sad truths about leadership is that, the higher up the ladder you travel, the less you know." -- Margaret Heffernan