Daily Tech Digest - March 28, 2019

The firm's Risk in Review study said when risk management is at the top of its game, "leaders have a clear line of sight into threats for informed decision making." The report is based on a global survey of 2,073 CEOs, board members, and professionals in risk management, internal audit, and compliance, conducted in October and November 2018, and describes six habits risk functions follow that help their companies set a course for sustainable growth. Digital transformations don't work well in isolation, the report said, because of the many connection points that can be exploited without proper controls. A well-thought-out and communicated digital strategy with growth targets and values anchors a risk culture. As organizations go all-in with transformations, the entire organization should prioritize items such as new technology, while risk functions set controls that map back to the strategy. In another survey recently conducted by the firm, CEOs globally said they expect the artificial intelligence (AI) "revolution to be bigger than the Internet revolution."


UK IoT research centre to tackle cyber risk


The centre’s research focus will be on the opportunities and threats that arise from edge computing, an innovative way to collect and analyse data in machine learning and artificial intelligence (AI) technology. When implemented successfully, edge computing can improve network performance by reducing latency, which is the time taken for data to traverse a system. “The centre’s ultimate aim is, by creating a trustworthy and secure infrastructure for the internet of things, to deliver a step change in socio-economic benefit for the UK with visible improvements for citizen wellbeing and quality of life,” said Jeremy Watson, Petras director and professor at University College London department of science, technology, engineering and public policy (STEaPP). “I expect productivity improvements and cost savings across a range of sectors including healthcare, transport and construction. In bringing together academics, industry technologists and government officials, our research will create accessible and relevant knowledge with clearly visible economic, societal or cultural impact that will help to cement the UK’s position as a world leader in this area.”


5 steps employers can take to retain project managers

Many project managers recognize their impact on the overall morale of the company and understand the need to remain positive and put on a "good face" for their teams, sponsors, and other stakeholders. The trouble is, employers may assume that what they see is what truly exists, and this can create a sense of complacency. As an employer, it's important to keep in regular contact with your project management professionals to ensure there are no issues impacting their job satisfaction. Although your project managers are likely to remain consummate professionals and push through to ensure that their projects are executed successfully, they may have concerns in some areas yet not feel supported enough to say anything. In fact, many project managers feel a great deal of responsibility to put the needs of others ahead of their own, sometimes to their own detriment. Take the time to regularly sit down with your project management professionals and keep up to date with the issues that impact their level of job and employee satisfaction.


Mashreq Bank’s Lean Agile Journey

Snowdon stated that the goal of agile was to work in a more collaborative way, to get decisions closer to the customer, and to provide a better structure so that they could more quickly respond to customer-driven demand, rather than push products/services at them. Capaldi stated, "I believe in kaizen and kaikaku as central concepts all companies must value; I will therefore only get involved in a transformation if I see these. In this case I was pleasantly surprised by the passion Steve and his team had in wanting to fully understand agile, and these concepts were clearly there; the fact they also come from a lean background just like me helped." "The head of the division was also massively behind the transformation and we very quickly agreed the metrics that would track progress," Capaldi mentioned. He said that agile is a journey; he prefers to challenge his clients that they aren't really trying to be agile yet, and that it's ok to start with "fake agile". "Fake it till you make it!" he said.


Mind the overlap between GDPR and ePD, warns privacy lawyer


According to Ustaran and Campion, as the digital economy progresses, European data protection law is likely to lead to a more harmonised approach to its interpretation and enforcement, as reflected by the EDPB’s opinion. However, the situation going forward is far from clear-cut, as the ePD was initially intended to be replaced by the proposed European ePrivacy Regulation (ePR) in May 2018, but the ePR was then expected to be implemented at some point in 2019 and now looks likely to take a little longer. “The whole e-Privacy Directive / forthcoming Regulation and GDPR debate is one of the most complex legal conundrums going on at the moment in this space,” Ustaran told Computer Weekly. “The recent EDPB opinion is very helpful in terms of understanding the regulators’ thinking, but where the e-Privacy Regulation fits in is a big missing piece,” he said. According to Ustaran, the e-Privacy Regulation is unlikely to be fully effective before 2020, given that the European Council has not decided on a preferred draft, which will then need to be discussed in detail with the European Parliament and the European Commission before being formally adopted.


Site reliability engineer shift creates IT ops dilemma


In some ways, the transitional struggle described by the SREcon attendee is unavoidable, according to experienced SREs who presented here this week. "If you talk to experienced veterans in the field, they might get a faraway look in their eye and say, 'Oh, yes, I remember that,'" said Jaren Glover, infrastructure engineer at Robinhood, a fintech startup in Palo Alto, Calif. "A bit of this pain is par for the course." There are, unfortunately, no easy solutions to the problem, SREs said, though support from employers to hire new engineers and scale up site reliability engineer teams is crucial. "It's also a matter of prioritization," said Arnaud Lawson, senior infrastructure software engineer at Squarespace, a website creation company in New York, in an interview after his SREcon presentation on service-level objectives. "Even if 80% of the team is dedicated to firefighting, the rest can tap into automation to get rid of tedious work." At large enough companies, such as the professional networking site LinkedIn, SREs are sometimes repurposed from other teams to help those that struggle to meet team performance targets or are overwhelmed by pager alerts.


Shared learning: Establishing a culture of peers training peers

“After you walk your teammates through how you apply a skill, let them test it out on their own to see whether they can repeat the process you used and achieve the same or a similar result,” he says. With so many organizations relying on technology for training, this hands-on aspect is key. “We’re moving from a world where just watching online tutorials and going to classes was enough to one that emphasizes experiential learning. Just knowing isn’t enough — it’s about doing,” Schawbel says. “If you’re lucky, your organization will give you access to learning, training, educational materials or subscriptions to various resources, but they aren’t actually providing the hands-on, peer-to-peer learning, mentorship, situational and project-based knowledge.” ... Once your coworkers have attempted to complete a task using the skill you taught them, review it, Schawbel says, but understand that nowadays, people don’t even like using the word “feedback,” and prefer “suggestions for improvement.” Here, the key is starting with the positive.


Understanding the role and need of a data protection officer

The DPO works alongside the other C-suite officers at your firm and maintains compliance with Data Protection Authority rules and regulations. This means that they should be an expert in, or at least well-versed in, the GDPR and all of its requirements, but it also means that the DPO needs to understand other jurisdictional requirements around the world in places your business operates. This responsibility is a serious one, and you should review the information available at the International Association of Privacy Professionals (IAPP) for further clarity. The IAPP is the world’s largest information privacy community and provides comprehensive data privacy and regulatory certification training. Because you have gotten this far, you must believe that your business has opportunities to create value through your data and data partnerships. You have also certainly noticed the seemingly daily disastrous headlines about data breaches plaguing companies. There have been hundreds of different data breaches involving more than 30,000 records each; some of these breaches affected hundreds of millions of data subjects.


Identifying exceptional user experience (UX) in IoT platforms

Enterprises should pick IoT platforms with superlative access to on-platform configuration functionality, with an emphasis on declarative interfaces for configuration management. Although many platform administrators are capable of working with RESTful API endpoints, good UX design should not require that platform administrators use third-party tools to automate basic functionality or execute bulk tasks. Some programmatic interfaces, such as SQL syntax for limiting monitoring views or dashboards for setting event processing trigger criteria, are acceptable and expected, although a fully declarative solution that maintains similar functionality is preferred. ... In general, the UX should be focused on providing information immediately required for the execution of day-to-day operational tasks while removing more complex functionality. These platforms should have easy access to well-defined and well-constrained operational functions or data visualization. An effective UX should enable easy creation and modification of data views, graphs, dashboards, and other visualizations by allowing operators to select devices using a declarative interface rather than SQL or other programmatic interfaces.
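To make the declarative-versus-programmatic distinction concrete, here is a minimal Python sketch contrasting a declarative view specification that a platform could interpret with the imperative REST calls an administrator would otherwise have to script. The endpoints, field names, and platform behaviour are all invented for illustration and do not come from any particular IoT product.

```python
import requests

# Declarative: the administrator states the desired monitoring view;
# the platform is responsible for working out how to build it.
dashboard_spec = {
    "name": "pump-health",
    "devices": {"type": "pump", "site": "plant-3"},   # select devices by attribute, not by ID
    "panels": [{"metric": "vibration", "aggregation": "p95", "window": "5m"}],
}

# Programmatic: the same outcome scripted against hypothetical REST endpoints,
# the kind of work good platform UX should not force on administrators.
base = "https://iot-platform.example.com/api"
pumps = requests.get(f"{base}/devices", params={"type": "pump", "site": "plant-3"}).json()
requests.post(f"{base}/dashboards", json={
    "name": "pump-health",
    "device_ids": [d["id"] for d in pumps],
    "panels": [{"metric": "vibration", "aggregation": "p95", "window": "5m"}],
})
```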


How IoT can transform four industries this year

"Among providers, IoT enablement will be leveraged toward the triple aim of cost, quality, and population health," Khaled said. Simple, embedded digital tools are already being piloted at large scale to mitigate infection risk around replaceable medical instruments, while smart threads and sticker or patch sensors have improved in their fidelity, tracking everything from cardiac readouts to body chemistry and sleep patterns. Among payers, IoT presents a distinct opportunity to enable smarter population risk management and accompanying reimbursement rate adjustments. IoT-enabled, long-term care facilities will be able to negotiate better rates if their sensor data supports fall risk and infection likelihood mitigation, Khaled said. The growing ecosystem of wearable fitness devices will help insurers recognize members who are (literally) taking steps to actively change their individual risk. IoT technologies supporting patient medication adherence will help both of these groups see major cost-saving and health improvement opportunities.



Quote for the day:


"True success is a silence inner process that can empower the mind, heart and soul through strong aspiration." -- Nur Sakinah Thomas


Daily Tech Digest - March 27, 2019

5 things you can do in 5 minutes to boost your internet privacy


For websites and services where you need to ensure the security of your account, like your bank, passwords alone simply are not enough anymore. In this scenario, you need two-factor authentication (2FA) -- specifically, the kind where a mobile app generates login codes for you. Not the kind where you are sent an SMS text message, because those can be intercepted or just fail to arrive. With app-based 2FA, you log into an app or website like normal, then you open an app that generates a special six-digit code every 30 seconds. This authentication app is synced with the other app or service so that your code matches the one that the main app or service expects to get. You enter the code from the authenticator app into the app or website that's asking for it, and then your login is complete. Google makes its own free authenticator app for iOS and Android. Unfortunately, there isn't a standardized method for setting up your account with 2FA. Amazon, PayPal, eBay and your bank will all use slightly different systems and terminology.
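App-generated codes of this kind are typically time-based one-time passwords (TOTP, RFC 6238): at setup the service and the authenticator app share a secret, and both sides derive the same six-digit code from that secret and the current 30-second time window. Below is a minimal sketch using only Python's standard library; the base32 secret is a placeholder, since the real one is provisioned by the service, usually via a QR code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238 / RFC 4226)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval              # current 30-second time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Placeholder secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because both sides compute the code independently, nothing secret travels over the network at login time, which is part of why this approach is harder to intercept than SMS codes.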



Scaling Microservices: Identifying Performance Bottlenecks

A bottleneck is the point in a system where contention occurs. In any system, these points usually surface during periods of high usage or load. Once identified, the bottleneck may be remedied, bringing performance levels into an acceptable range. Utilizing synthetic load testing enables you to test specific scenarios and identify potential bottlenecks, although this only covers contrived situations. In most cases, it is better to analyze production metrics and look for outliers to help identify trouble on the horizon. Key performance indicators from your application include requests/sec, latency, and request duration. Indicators from the runtime or infrastructure include CPU time, memory usage, heap usage, garbage collection, etc. This list isn't exhaustive; there may be business metrics or other external metrics that factor into your optimizations as well.
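As a rough illustration of analyzing production metrics for outliers rather than relying only on synthetic load tests, the sketch below computes p50/p95/p99 request durations from a sample window. The metric values are made up; in practice they would come from your monitoring system.

```python
import statistics

# Hypothetical request durations (milliseconds) scraped from production metrics.
durations_ms = [12, 15, 14, 13, 220, 16, 14, 15, 480, 13, 14, 17, 15, 16, 900]

cuts = statistics.quantiles(durations_ms, n=100)    # 99 percentile cut points (Python 3.8+)
p50, p95, p99 = cuts[49], cuts[94], cuts[98]
print(f"p50={p50:.0f}ms  p95={p95:.0f}ms  p99={p99:.0f}ms")

# A p99 far above p50 means a small share of requests is hitting contention
# (locks, a slow dependency, garbage collection pauses) and is worth investigating.
if p99 > 10 * p50:
    print("Tail-latency outliers detected; inspect the slowest requests.")
```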


The devil is in the data, but machine learning can unlock its potential

Effective data governance can enable intelligent real-time business decision-making that will, in turn, drive organisations in a more profitable direction. One of the best approaches when it comes to unleashing big data’s potential is investing in a data lake: a central repository that allows organisations to collect everything — every bit of data, regardless of its structure and format — which can then be accessed, normalised, explored and enriched by users across multiple business units to reveal patterns across a shared infrastructure. The advantage of this approach is that organisations can gain end-to-end visibility of the enterprise data and actionable business insights. The disadvantage is that the data has to be kept up to date, which takes time and effort. Another downside is the GDPR compliance and data security risks that are associated with depositing the entirety of an organisation’s business-critical data into a data lake.


Insights for your enterprise approach to AI and ML ethics

The promise of AI is in augmenting and enhancing human intelligence, expertise and experience. Think helping an aircraft mechanic make better, more accurate and more timely repairs – not automating the mechanic out of the picture. But the scope of what you can do is tempered by inherent limitations in today’s AI systems. I like to frame this as a recognition that computers don’t “understand” the world the way we do (if at all). I don’t want to get into an epistemological discussion about the definition or nature of understanding, but here’s what I think is a very illustrative and accessible example. One common application of AI is in image processing problems, i.e., I show the machine an image – like what you might take with your phone – and the machine’s task is to report back what’s in the image. You build a system like this by feeding thousands or millions or even billions of images into an AI program (such as a neural network); you might hope that somehow, as a result of processing all of these images, the software builds some kind of semantic representation of the world.
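To make the "feeding in images" step concrete, here is a minimal, hedged sketch of that supervised training loop using Keras. The dataset (CIFAR-10) and the layer sizes are placeholders chosen for illustration, not anything from the article; the point is that the model only learns to map pixels to label scores.

```python
import tensorflow as tf

# A small public dataset standing in for the "thousands or millions of images".
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train / 255.0   # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per possible label
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3)  # the network fits pixel patterns; it does not "understand" scenes
```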


Alibaba's UC Browser can be used to deliver malware


Dr Web researchers note that for now UC Browser represents a "potential threat" but warn that all users could be exposed to malware due to its design.  "If cybercriminals gain control of the browser's command-and-control server, they can use the built-in update feature to distribute any executable code, including malware. Besides, the browser can suffer from MITM (man-in-the-middle) attacks," the security company notes. The MITM threat arises because UCWeb committed the security blunder of delivering updates to the browser over an unsecured HTTP connection. "To download new plug-ins, the browser sends a request to the command-and-control server and receives a link to a file in response. Since the program communicates with the server over an unsecured channel (the HTTP protocol instead of the encrypted HTTPS), cybercriminals can hook the requests from the application," explains Dr Web.  "They can replace the commands with ones containing different addresses. ... "


Deep Learning for Speech Synthesis of Audio from Brain Activity

In three separate experiments, research teams used electrocorticography (ECoG) to measure electrical impulses in the brains of human subjects while the subjects listened to someone speaking, or while the subjects themselves spoke. The data was then used to train neural networks to produce speech sound output. The motivation for this work is to help people who cannot speak by creating a brain-computer interface or "speech prosthesis" that can directly convert signals in the user's brain into synthesized speech sound. The first experiment, which was run by a team at Columbia University, used data from patients undergoing treatment for epilepsy. The patients had electrodes implanted in their auditory cortex, and ECoG data was collected from these electrodes while the patients listened to recordings of short spoken sentences. The researchers trained a deep neural-network (DNN) using Keras and Tensorflow using the ECoG data as the input and a vocoder/spectrogram representation of the recorded speech as the target.
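The article does not give the network details, but the general shape of such a model, a regression from ECoG feature frames to spectrogram frames that a vocoder can later turn into audio, might look like the hedged Keras sketch below. The feature and spectrogram dimensions, and the random stand-in data, are invented placeholders.

```python
import numpy as np
import tensorflow as tf

n_ecog_features, n_spectrogram_bins = 128, 80        # hypothetical dimensions

# Random stand-ins for aligned (ECoG frame, spectrogram frame) training pairs.
x = np.random.randn(1000, n_ecog_features).astype("float32")
y = np.random.randn(1000, n_spectrogram_bins).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(n_ecog_features,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_spectrogram_bins),        # regression: one value per spectrogram bin
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, batch_size=32)

# Predicted spectrogram frames would then be passed to a vocoder to synthesize speech audio.
```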


An inside look at Tempo Automation's IIoT-powered ‘smart factory’
“There could be up to 20 robots, 400 unique parts, and 25 people working on the factory floor to produce one order start to finish in a matter of hours,” explained Shashank Samala, Tempo’s co-founder and vice president of product in an email. Tempo “employs IIoT to automatically configure, operate, and monitor” the entire process, coordinated by a “connected manufacturing system” that creates an “unbroken digital thread from design intent of the engineer captured on the website, to suppliers distributed across the country, to robots and people on the factory floor.” Rather than the machines on the floor functioning as “isolated islands of technology,” Samala added, Tempo Automation uses Amazon Web Services (AWS) GovCloud to network everything in a bi-directional feedback loop. “After customers upload their design to the Tempo platform, our software extracts the design features and then streams relevant data down to all the devices, processes, and robots on the factory floor,” he said. This loop then works the other way: As the robots build the products, they collect data and feedback about the design during production.



Using value stream management and mapping to boost business innovation

Value stream mapping purists may argue that the above exercise is not the real process because traditional components such as the time metrics, activity ratios and future state were omitted. Fear not, these components are included in a full-blown formal value stream mapping exercise. However, teams such as Thrasher’s have made substantial improvements from shorter versions of the exercise by making work visible. The net result is a compelling change in the right direction. Value stream management is the practice of improving the flow of the activities that deliver and protect business value -- and proving it. It’s a nascent digital concept that measures work artifacts in real time to visualize the flow of business value and expose bottlenecks to optimize business value. A significant strength of this practice centers around how and where work is undertaken. This activity is captured through the work items mentioned above in the toolchain, providing a traceable record of how software is planned, built and delivered.


Redis in a Microservices Architecture

Redis can be widely used in microservices architecture. It is probably one of the few popular software solutions that may be leveraged by your application in so many different ways. Depending on the requirements, it can act as a primary database, cache, or message broker. Since it is also a key/value store, we can use it as a configuration server or discovery server in a microservices architecture. Although it is usually defined as an in-memory data structure store, we can also run it in persistent mode. ... If you have already built microservices with Spring Cloud, you probably have some experience with Spring Cloud Config. It is responsible for providing a distributed configuration pattern for microservices. Unfortunately, Spring Cloud Config does not support Redis as a property source's backend repository. That's why I decided to fork the Spring Cloud Config project and implement this feature. I hope my implementation will soon be included in the official Spring Cloud release, but, for now, you may use my forked repo to run it. It is available on my GitHub account: piomin/spring-cloud-config.
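As a minimal illustration of the configuration-server idea, the hedged Python sketch below uses the redis-py client to publish and read service configuration from a Redis hash. The key names and values are invented, and this shows only the general pattern, not the author's Spring Cloud Config fork.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# A central process publishes configuration for one microservice as a Redis hash.
r.hset("config:order-service", mapping={
    "db_url": "postgres://orders-db:5432/orders",          # placeholder values
    "cache_ttl_seconds": "300",
    "feature_flags": json.dumps({"new_checkout": True}),
})

# Each service instance reads its configuration at startup (or on a refresh signal).
config = r.hgetall("config:order-service")
print(config["db_url"], json.loads(config["feature_flags"]))
```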


Data visualization via VR and AR: How we'll interact with tomorrow's data

Data visualization in VR and AR could be the next big use case for the technologies. It's early days, but examples of 3D data visualizations hint at big changes to come in how we interact with data. Recently, I spoke with Simon Wright, Director of AR/VR for Genesys, about one such experiment. Genesys helps companies streamline their customer service experience with automated phone menus and chatbots, for example, but in his role Wright has a lot of latitude to push the boundaries of Mixed Reality technologies for enterprise customers. "One of the things I'm personally excited about is the ability to create hyper visualizations," Wright tells me. "We capture massive amounts of data, and we've created prototypes to almost magically bring up a 3D model of Genesys data. This is where there could be huge opportunities for AR, which has advantages over a 2D screen." For one recent project, Wright and his team wanted to project data pertaining to website analytics onto the wall of a restaurant in a beautiful way. "It started as a marketing-led project," he explains.




Quote for the day:


"Leadership to me means duty, honor, country. It means character, and it means listening from time to time." -- George W. Bush


Daily Tech Digest - March 26, 2019

Firms urged to gear up for new malware and tactics as threats proliferate


As network defences increase in sophistication, so does the anonymity of attacks, which now include the targeting of non-standard ports to ensure payloads are concealed upon delivery, the SonicWall researchers warned. Based on a sampling of more than 700 million malware attacks, SonicWall found that 19.2% of malware attacks used non-standard ports, up 8.7% compared with 2017. “The concern over security and privacy is more prevalent than ever before. Industry and government must collaborate to build a more secure environment, mitigate risk, and build citizen trust in government and consumer trust in business,” said Michael Chertoff, executive chairman and co-founder of The Chertoff Group, and former US secretary of homeland security. “This report provides critical analysis into the evolution of cyber adversaries’ threat tactics and methods. As organisations increasingly rely on metrics to understand and forecast risk, this intelligence will help enterprises and governments make informed decisions on their security investment.”



Maintaining security control in the age of the mobile workforce

The ability to do our jobs from outside the corporate walls keeps workers productive and helps businesses remain operational. Or in some cases, saves the organization travel fees – especially those caused by rescheduling or canceling hotels and airfare during inclement weather. Beyond the seasonal spikes, many organizations are adopting more flexible work policies. The number of U.S. mobile workers is expected to grow to 105.4 million, or more than 70 percent of the workforce, by 2020. The composition of the modern workforce is changing. Not to mention that, as an always-on society, we have a problem disconnecting. Fast forward from this brutal winter and 42 million people are expected to travel over Memorial Day weekend, with a majority of them still tethered to work communications on their devices. When only 11 percent of end users access business applications from the corporate office 100 percent of the time, the growth of the mobile workforce places a lot of strain on data security. Data now sits on endpoints spread around the globe.


'Operation ShadowHammer' Shows Weakness of Supply Chains


While the exact scope and purpose of installing these backdoors remains unclear, researchers say the APT group does appear to have targeted a very specific set of Asus PC users. These victims were identified through their MAC addresses. Specifically, the attackers hardcoded MAC addresses into the trojanized software samples recovered by Kaspersky Lab, which says this list was used to target specific machines and their users. Once the backdoor was installed on a victim's machine, it would signal back to a command-and-control server and then receive additional malware to plant in the PC, according to Vice Motherboard, which first reported the story. If the PC was not on the target list, the malware did not initiate a call to the C&C server. One reason why the operation continued for so long without being detected is that attackers used legitimate certificates, such as "AsusTeK Computer Inc.," as part of the trojanized updates, researchers say. The updater software was also hosted on legitimate domains.
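Purely as an illustration of what MAC-based targeting can look like, and not the actual ShadowHammer logic, the sketch below checks the local machine's MAC address against a hardcoded allowlist; it assumes the list is stored as hashes, which is one way such a check can be concealed in a binary.

```python
import hashlib
import uuid

# Hypothetical hashed allowlist; real samples reportedly carried many hardcoded entries.
TARGET_MAC_HASHES = {"0123456789abcdef0123456789abcdef"}   # placeholder value

def local_mac_hash() -> str:
    mac = uuid.getnode()                     # primary interface MAC address as an integer
    return hashlib.md5(mac.to_bytes(6, "big")).hexdigest()

# Only machines whose MAC hash appears in the list would go on to contact the C&C server;
# everyone else receives the trojanized update but sees no further activity.
if local_mac_hash() in TARGET_MAC_HASHES:
    print("on the target list: second-stage payload would be requested")
else:
    print("not on the target list: backdoor stays dormant")
```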


Have we reached “peak” chief digital officer?


Our latest CDO study found that CDOs with a market-facing background had dropped to 18 percent, down from 39 percent two years ago, and that 41 percent of organizations had attracted CDOs with a solid technology background, up from 32 percent in 2016. And 28 percent had a strategy, business development, or consulting background — a surge from 2016, when just 21 percent had such credentials. Indeed, a key study finding was that a third of CDO positions saw turnover in 2018. This reflects the need for new skill sets and experience as digital transformation programs move beyond pilots in specific corners of the business to play a central part in everyday operations. Looking ahead, we think that as transformation becomes part of the core business, the next step will be for the CDO to disappear. Digital transformation will become the responsibility of every member of the executive team. We are not there yet. But where organizations have a CDO in place today, the priority should be to ensure that the person in this role still has the appropriate perspective and capabilities to move the digital transformation agenda forward, by embedding it both deeply within operations and at scale across the organization.


How blockchain is becoming the 5G of the payment industry

On the heels of JPMorgan Chase & Co. creating its own stable coin token for use on blockchain distributed ledgers, IBM last week launched its Blockchain World Wire, which will enable banks to transfer tokens and cryptocurrency in near-real time, cutting out banking intermediaries and lowering capital costs and clearing fees. The distributed ledger technology (DLT) network will initially enable cross-border payments and settlements based on the Stellar protocol, a decentralized payment network that uses its own cryptocurrency, Stellar Lumens (XLM). While the IBM network will support XLM, it will primarily use stable coin backed one-for-one by the U.S. dollar and other national currencies. In other words, IBM will run the blockchain infrastructure – the computer nodes and software – and the banks will transmit digital tokens tied to fiat currency over the network. The Rizal Commercial Banking Corporation (RCBC), one of the Philippines' top 10 banks by assets, will be among the first of four using the Blockchain World Wire for remittance payments services. 


ProBeat: Google’s Stadia is all about the cloud and AI

The cloud conversation is straightforward. Stadia is a cloud gaming service, after all. But what about AI? The consumer-facing part is of course Google Assistant. Google is positioning YouTube as Stadia’s killer app, but Google Assistant also has a role to play. The Stadia Controller has a Google Assistant button. When you get stuck, instead of scouring YouTube for the right walkthrough explanation, you’ll supposedly be able to push the Google Assistant button for help. How rudimentary or how complex this is will depend on how good the AI is. The developer-facing part is Style Transfer ML. Style transfer is a machine learning technique that recomposes images in the aesthetic of other images, letting game developers map textures and color palettes from artwork, photos, and still images to game environments automatically. Google needs to woo developers to Stadia, and it’s naturally leaning on its AI chops to do so. But like everything Google does, Stadia is really about data collection.


Artificial intelligence making major inroads into Russian banking


According to the Expert RA study, those obstacles include discrepancies with data in information systems, but once the issue of data consistency is resolved, finding qualified personnel to process data is set to be a major challenge. Industry insiders have already been complaining about difficulties in finding qualified personnel to operate AI-based solutions. “The main factors that are impeding the adoption and development of AI are shortages of qualified professionals and problems with the infrastructure of information systems,” said Smirnov. Putyatinsky agreed, saying: “The acutest issue is training of qualified personnel.” To help resolve this challenge, CBoM has been running an internship programme called IB Universe for the past 12 months. “This allows students and recent graduates to acquire practical experience in various areas of investment business,” added Putyatinsky. According to Putyatinsky, educational programmes of that kind will eventually allow banks to train personnel in the working environment, producing a new wave of employees who will already be prepared to deal with new technologies, such as AI and machine learning.


EU Seeks Better Coordination to Battle Next Big Cyberattack


The news that Europol is trying to better prepare EU member states for the next big cyberattack comes as fresh warnings are being sounded that Russia is looking to interfere in upcoming EU parliamentary elections scheduled for May. On Thursday, CNBC reported that FireEye has found evidence that two advanced persistent threat groups are gearing up for more attacks in the coming months. The CNBC report specifically pointed to increasing activity from APT 28, the Russian-backed group that is also known as Fancy Bear and which is believed to have been involved in different disruption campaigns around the world, including Sandworm, which has been linked to the NotPetya wiper-malware attack that was unleashed in July 2017. To help governments better defend themselves against such attacks, numerous vendors - including Cloudflare, Google, Microsoft and Symantec - have moved to offer free services. In February, Microsoft announced that it would expand its AccountGuard, which provides protection and threat detection geared to blocking nation-state and APT activity, to 12 more European countries in preparation for the 2019 elections.


Pivoting to digital maturity


Transformation initiatives are only as valuable as the business impact they drive. In our analysis of the survey results, therefore, we adopted a simple measure of digital maturity: the extent to which respondents said an organization’s digital transformation efforts are delivering business benefit. We then classified respondents into three segments—lower, median, and higher—according to the degree of business benefit they said they were achieving from their actions. Digital transformation is a continual process, and digital maturity is a moving target. So we present these as relative rather than absolute classifications. ... Achieving data mastery can entail an organization wide effort, sometimes under the direction of a chief data officer, to identify and evaluate data assets and build or acquire the necessary platforms and competencies. Eighty-eight percent of higher-maturity companies in our survey reported that they were obtaining a significant positive impact from their use of data, compared to just 24 percent of lower-maturity companies.


Algorithms have already taken over human decision making


In fact, algorithms operating without human intervention now play a significant role in financial markets. For example, 85% of all trading in the foreign exchange markets is conducted by algorithms alone. The growing algorithmic arms race to develop ever more complex systems to compete in these markets means huge sums of money are being allocated according to the decisions of machines. On a small scale, the people and companies that create these algorithms are able to affect what they do and how they do it. But because much of artificial intelligence involves programming software to figure out how to complete a task by itself, we often don’t know exactly what is behind the decision-making. As with all technology, this can lead to unintended consequences that may go far beyond anything the designers ever envisaged. ... But the algorithms that amplified the initial problems didn’t make a mistake. There wasn’t a bug in the programming. The behaviour emerged from the interaction of millions of algorithmic decisions playing off each other in unpredictable ways, following their own logic in a way that created a downward spiral for the market.



Quote for the day:


"At the heart of great leadership is a curious mind, heart, and spirit." -- Chip Conley


Daily Tech Digest - March 25, 2019

Why Big Banks Are Losing To Tech Giants Over Open Banking

42% disagree that collaboration with fintechs is needed for retail banks to innovate faster. Michal Kissos Hertzog, CEO of Pepper, said of the research: “It highlights the size of the disconnect between traditional banks and their customers. Banks are not innovating fast enough, and the value proposition and consumer experience is nowhere near where it should be. It's not for lack of trying, but the reality is that banks are failing to go fully digital and are falling further behind. However, it’s not all bad news - banks still retain consumer trust, which is a position of tremendous strength, and decision-makers understand how they need to improve. Only time will tell if they are able to deliver.” For banks in the U.K., research shows that decision-makers believe traditional retail banks are struggling to compete in the digital era. The vast majority (82%) say banks aren’t innovating fast enough to meet changing consumer demands for digital services, with almost half (48%) thinking that these banks are at least three years behind fintech rivals.



Finding real strength in numbers through data partnerships

Over the last two years, we’ve seen some form of the following paragraph on a presentation slide at almost every data-focused conference attended. The quote has been stolen, and re-stolen, from a TechCrunch article by Tom Goodwin, in which he said: "Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate. Something interesting is happening." Each of those companies has created massive value by crafting data partnership approaches and then delivering a value greater than any one dataset could provide on its own. This is data innovation, combined with masterfully executed consumer marketing and user experience. Each one embraced the Amazon vision of open data structures, internally and externally, to power their go-to-market value proposition. In other words, when it comes to data partnerships, the whole is often greater than the sum of its parts.


The Benefits Of Edge Computing In IoT


Edge computing in IoT implies having autonomous systems of devices at these endpoints (or the edge) that simultaneously gather information and respond to the information without having to communicate with a remotely constructed data center. Instead of having remote data centers and computational servers, the processing of data can be done right where the data is collected, eliminating the need for constant connectivity to centralized control systems and the problems inherently associated with such setups.  For instance, a software company that sells cloud-based mobile applications can have cloud servers based in multiple locations closer to users instead of in a single location that may lead to undesirable latency and a single point of failure in case of any mishap. If the centralized servers failed for some reason, all application users would lose their data and access to services at once. Additionally, the servers would also have to deal with heavy traffic, causing latency and inefficiency. On the contrary, a decentralized system would ensure that all the data pertinent to specific users would be hosted in the data center closest to them among multiple options, minimizing latency and limiting the impact of any potential failure.
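As a toy example of processing data where it is collected, the hedged sketch below aggregates a window of raw sensor readings on an edge node and forwards only a compact summary upstream. The endpoint URL, alarm threshold, and payload shape are all invented for illustration.

```python
import statistics
import requests

def summarize(readings):
    """Reduce a window of raw readings to a compact summary on the edge node."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
        "alarms": sum(1 for r in readings if r > 90.0),   # placeholder alarm threshold
    }

window = [71.2, 73.9, 70.4, 95.1, 72.0]   # stand-in temperature readings from local sensors
summary = summarize(window)

# Only the summary leaves the site, so the device keeps working (and alarming locally)
# even if connectivity to the central system is slow or temporarily unavailable.
requests.post("https://central.example.com/ingest", json=summary, timeout=5)
```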


Cohesity plans to put backup data to good use

As with Isilon’s OneFS file system, Cohesity’s SpanFS distributes storage across several nodes, ensures redundancy of data, indexes data by means of metadata and shares it across the network, NAS-style. SpanFS is not limited to physical nodes and can integrate its capacity with the cloud. It has replication functionality that allows it to continue activity from a remote site or the cloud in case of an incident. In addition to NFS and SMB access, it can share data via the object storage S3 protocol, that is widely used for cloud applications. SpanFS is part of Cohesity’s DataPlatform, which in part comprises access to admin functionality, including configuration, deduplication, replication, monitoring. Also among these is SnapTree, which allows use of cloned content to, for example, run project tests with real data. DataPlatform software can come on hardware from HPE, Dell or Cisco as appliances or in virtual appliance format. As an option, the Helios SaaS console allows the centralisation of administration for multiple DataPlatform clusters across a number of cloud sites.


Containers, cloud-centric app services boost support for DevOps world


The underlying proxy technology also provides transparent routing to multiple back-end components, Transport Layer Security (TLS) termination, etc, and crosscutting concerns (i.e. logging, security and data transfer) at the edge of systems. This is particularly valuable within an API gateway – the entry point into microservices-based applications from external API clients.  Further, F5 is introducing a new cloud-native application services platform, specifically designed for the apps your DevOps and AppDev teams care about most. One significant innovation is its Service Mesh incubation, Aspen Mesh. “While container orchestration tools like Kubernetes have solved microservice build and deploy issues, many runtime challenges remain unsolved,” said Kara Sprague, senior vice president and general manager of Application Services Business Unit at F5. “Our fully supported service mesh makes it easy to manage the complexity of microservice architecture.”


Are You Setting IT Up To Stifle Your Innovation?

The fact is that manufacturing organizations are a bit late to enterprise self-service analytics, or should I say self-service data management, compared to more centrally managed or highly regulated organizations like financial services or healthcare companies. Such organizations have already been dabbling in big data, cloud, and machine learning with varying degrees of success for a decade. Many deployed self-service analytics environments years ago. Nowadays, they are experiencing the “trough of disillusionment,” setting them up to finally realize the fruits of artificial intelligence (AI) adoption. They’ve learned that going back to basics around data quality, governance, cataloging, and cloud-based data integration to facilitate “data democratization” is needed to take full advantage of more advanced technologies. Manufacturers can avoid the mistakes and costly learnings of other industries by doing it right the first time. However, their traditional plant-centric approach and tactile-oriented innovation viewpoint permeate – and potentially limit – IT-related innovation.


IT needs to make mobile unified communications a priority

The need for safe, reliable, and easy-to-use communications tools has given rise to unified communications (UC), a strategy that integrates multiple communications modalities under a single management and security umbrella. The result is more effective communication, improved collaboration, and a boost to security and regulatory policies. Now that mobility is the primary networking vehicle for end users, it’s time for IT departments to make mobile unified communications (MUC) a priority. The most important benefit of MUC is the ability of organizations to finally leave behind the uncontrolled, untracked mish-mash of consumer-centric, carrier, and third-party communications tools traditionally applied over the years. Communications are a critical organizational resource; MUC is a much easier vehicle to manage and scale, and MUC offers the visibility and control that’s essential to enterprise IT deployments. These advantages will enable MUC to become the dominant provisioning strategy and mechanism for organizational communications over the next five to 10 years.


Ransomware, Cryptojacking, and Fileless Malware: Which is Most Threatening?

The drama of the subtitle actually understates the danger of fileless malware. Of ransomware, cryptojacking, and fileless malware, fileless malware is both the youngest and perhaps the most dangerous. Fileless malware, as the name suggests, doesn’t behave as traditional malware does. Malware usually downloads a file onto the victim device or enterprise environment; this allows legacy antivirus solutions to locate and remove it. Fileless malware doesn’t do this. Instead, it uploads a program to a native process on the endpoint such as Java or PowerShell. Then fileless malware forces the native program to run its code, which performs the malicious task concealed behind its normal processes. Legacy endpoint security systems, which depend on traditional threat signatures, can’t possibly detect these attacks. Often, fileless malware leaves no trace of itself behind. Hackers increasingly adopt fileless malware attacks because, especially against legacy solutions, they prove largely successful.


How to make sure your artificial intelligence project is heading the right way


"The research highlights how everyone involved in the use of AI and big data must have wider discussions about the outcome you're looking for, such as better health, and then work backwards to issues like data sharing and information security. You should always start with the outcome," he says. Baker suggests business leaders looking to ensure they focus on the right objectives for AI and data should consider establishing a public ethics board. Just like companies have executive boards to make decisions, these ethics panels can help organisations that are using emerging technology to make publicly minded decisions. "We know some tech companies, like Deep Mind, already do this," says Baker. "Don't assume that you know what the public wants or that the market research you conduct into public opinions is correct. You need to actually have an ethics panel, and discuss what the issues are and what the needs of the public really are."


Small businesses hit hardest by cyber crime costs


The average cost of cyber attacks to small businesses was £65,000 in damaged assets, financial penalties and business downtime. This puts the total cost of cyber crime across all UK small businesses in 2018 at an estimated £13.6bn. This represents 80% of the financial impact of cyber attacks on all UK business in the past year, with a third reporting that they were hit by cyber crime. The survey, conducted by research consultancy Opinium, found that while phishing emails claimed the greatest number of victims (25%), ransomware attacks were the most financially damaging, costing victims £21,000 each on average. Although the trend for large businesses to fall victim at the highest rate continued, with seven in every 10 companies of more than 250 people being hit, the rate at which small companies succumbed to cyber criminals reached its highest level since Beaming started surveying business leaders in 2016. Nearly two-thirds (63%) of small businesses reported being a victim of cyber crime in 2018, up from 47% of small businesses in 2017 and 55% in 2016.



Quote for the day:


"Strategy is not really a solo sport even if you're the CEO." -- Max McKeown


Daily Tech Digest - March 24, 2019

Service Brokering & Enterprise Standard - Build Your Competitive Advantage In The Digital World

Implementing service brokering within an organization requires a fundamental change in culture as the focus needs to evolve from function/technology to service and service delivery. Rather than silos focused around technologies, the organization should rally around teamwork to deliver each service in an optimal way as the broker is central in the integration process between provider and consumer. This is the most difficult aspect when implementing brokering. Changing the way people work, evolving their behaviors to be more user focused takes time. Unfortunately, IT departments have no choice, either they are able to deliver the services required by the users through the supply chain they have developed or they will focus on managing the legacy environments, which may not be seen as a very exciting job. Multiple service use cases are documented in the guide. For each of them the roles and responsibilities of each of the players differ, but efficient service delivery can only be assured if the providers work smoothly and transparently together. 


Technical Debt and Scrum: Who Is Responsible?

The issue is that there is not just the typical hack, the technical shortcut that is beneficial today, but expensive tomorrow that creates technical debt. (A not uncommon tactic in feature factories.) There is also a kind of technical debt that is passively created when the Scrum Team learns more about the problem it is trying to solve. Today, the Development Team might prefer a different solution by comparison to the one the team implemented just six months ago. Or perhaps the Development Team upgrades the definition of “Done,” thus introducing rework in former product Increments. No matter from what angle you look at the problem, you cannot escape it, and Scrum does not offer a silver bullet either. ... the Scrum Guide is deliberately vague on the question of who is responsible for the technical debt to foster collaboration and self-organization, starting with the Scrum values — courage, and openness come to mind — leading straight to transparency and Scrum’s inherent system of checks and balances.


Cybersecurity is an attractive career for ambitious people and a great way to make the world a better place. If you want a career in cybersecurity, don’t wait. You don’t need to be of a particular age or gender. You don’t need any particular approval or certification or study place to get going. Just start learning and start doing. Get involved any way you can. Bug bounties are a great way to learn and test your skills. Check out Hacker101. Just know that even if you can jump straight in, you will need skill, tenacity and patience to ultimately reach a rewarding level of proficiency. Bug hunters may need a year or two of learning before they start finding security vulnerabilities worth reporting. Most bug hunters study the Hacktivity feed where vulnerability reports are published once the vulnerability has been fixed. Also note that to go far and to become a technical expert on cybersecurity, a lot of studying will be needed. What you invest in learning will come back as career opportunity. A degree in Computer Science will not hurt.



Three Steps to Regain Control over your IT Landscape

Most IT landscapes of larger companies consist of hundreds of applications that are interconnected via poorly designed interfaces. In most companies, these IT landscapes already have an enormous technical debt (i.e., an ‘unnecessary complexity’). In my experience, a company typically runs between 80% and 90% more IT applications (and therefore also servers, databases, networks, costs) compared to what would be needed if it had implemented the ideal architecture. A tremendous waste of money and resources, and the reason why IT is perceived as tardy and as a cost factor and not as an enabler. From my point of view, there are three major reasons for this disastrous situation ... There is a tendency to blame the IT department for this situation, but that’s not true. It’s a business problem. Requirements are typically not consolidated well across departments. IT has always just been the contractor who had to implement those punctual requirements under time pressure.


Like Football, Your Cybersecurity Defense Needs a Strong Offense

Today, it’s essential to not only build the strongest possible defenses but also to deploy creative strategies to gain information on your attackers and how they are trying to breach your networks and penetrate your systems. This idea that “the best defense is a good offense” is not just a slogan representing the conventional wisdom of the cybersecurity intelligentsia. ... In “The Future of Cybersecurity: The Best Defense Is a Good Offense,” the company speaks directly to all organizations when it waves the following red flag: With the sophisticated techniques threat actors are using to mask their activities, the traditional approach of ‘building bigger fences’ will no longer suffice. The only way organizations can protect themselves is by unleashing offensive cyber techniques to uncover advanced adversaries on their networks. As an example of what going on the offensive might look like, one strategy the company uses is to configure fake computers in a phony, intentionally vulnerable network that functions as “a virtual mousetrap” to lure cyber adversaries; when the hackers bust in, they reveal valuable information about their identities, tactics and intentions.


Cybersecurity: Don’t let the small stuff cause you big problems

Organisations of all sizes in all sectors need to have a cybersecurity strategy, but for healthcare, it's particularly important. Not only do IT networks within hospitals and doctors' surgeries need to be accessible and secure in order to provide patient care, these networks involve medical information – some of the most sensitive data that can be held about people. "What's really important is having control over the data and knowing where it is. It's the same issue that's dealt with in many other industries, but to an extra level of duty of care for the people whose data you've got," said Sian John, chief security advisor for EMEA at Microsoft. "You're talking about privacy: it's one level when you're talking about financial data, it's another level if that's my medical history," she added. What's important for health organisations as a whole is being absolutely sure how data is controlled and how it is accessed – and making knowing a priority.


Some Cybersecurity Vendors Are Resorting To Lies & Blackmail


It’s hard for cybersecurity companies to get noticed. Smaller vendors particularly struggle because top corporations already have contracts or strong customer relationships with the biggest companies. This is where the threat of negative media coverage comes in. Exposing a security flaw, no matter how small, can garner big headlines if it’s at a big company. Enough press coverage can spark weeks of outrage and land top leaders in front of Congress. However, breaches that actually cause damage are relatively rare. As a result, vendors often try to make a big deal out of minor breaches that don’t expose important company or customer information. For instance, all four executives said vendors tried to draw their attention to potentially exposed data on Amazon and Microsoft Azure cloud servers. None of this data included any current material information. In one case, a database housed business plans for a 10-year-old project that had already been reported on and was now irrelevant. In another case, the data included information about customers — but only their names and the fact that they had attended a technology conference several years earlier.



When Scrum Is Not The Right Answer

As organizations have bought into adopting an Agile approach to software development, I've noticed that one corporation's identification with terms like Agile or Scrum may differ from another's. Almost as if they are deciding how they wish to utilize Agile concepts to best meet the needs of their teams. I am really okay with this approach, as I noted in the article, "Agile For the Sake of Being Agile." But, what if Agile or Scrum is not the right answer at all? ... While the flow is certainly more Kanban than anything else, the goal is to keep the flow of work moving forward. Tickets pushed back to the to-do column would not need to go back to the original developer, but could be handled by any other developer, since the code has since been merged. An alternate flow could be that the REVIEW and TEST columns are swapped, delaying the merge until after testing has completed, but that was not suggested initially, in order to keep the flow of work moving as quickly as possible. After all, the key is to meet the aggressive deadline.



Keep in mind, a cloud move is not as simple as downloading new software. It’s an entirely new and different ecosystem, one that involves a list of risks: legal, financial, commercial, and compliance, to name a few. To make such a move without stopping long enough to become informed of the dangers is not a good idea. It’s also not as simple as learning which vulnerabilities and threats are sitting out there at any particular moment in time. Threats evolve over time. Old ones become less effective or fall out of favor with hackers and new ones emerge. ... The problem is that you don’t have direct access to see where your data is stored and verify that deleted data has actually been deleted. To a large extent, you have to take it on faith that your CSP does what it says. Consider the structure of the cloud. There’s a good chance your data is spread over several different devices and in different physical locations for redundancy. Further, the actual deletion process is not the same among providers.


Why We Are Making Things so Complicated

There are many reasons: First, Joseph is dealing with the laws of physics – in a brilliant way I should add. In the virtual world of software-based solutions, such laws don’t apply. Furthermore, I suspect that Joseph had to go to a dozen stores to buy all this apparatus and spend a lot of time finding the right gizmos to fit his process. In software-based solutions, you just click, download it, resize it, or copy and paste it ad infinitum if you wish. It is usually simple, often effortless. It can also go in all directions and augment the overall complexity, but your IT staff will still find a way to make it work. In other words, the drawback of computer-based solutions is that it is easy to “clog your kitchen” as in the video. Second, after Joseph is done with video-making, he cleans the kitchen before the in-laws come for dinner. Your IT-based solutions support your business and they stay there as long as you’re operating. As easy as it is to fill the kitchen with software-based components, it is proportionately as difficult to empty the room – unless it was planned for.



Quote for the day:


"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown


Daily Tech Digest - March 23, 2019

Digital Convergence’s Impact on OT Security

A significant component of the challenge is that IT and OT networks are founded on very different, and often highly contradictory, priorities. IT networks generally follow the well-established Confidentiality/Integrity/Availability (CIA) model. The emphasis is on ensuring the confidentiality of critical data, transactions, and applications, maintaining network and data integrity, and only then ensuring the protected availability of networked resources. These priorities tend to be the basic building blocks of any security strategy. Conversely, OT networks depend upon and operate with an exactly inverted model: the safety and availability of resources is the topmost priority. Assembly lines, furnaces, generators, and other large systems simply should never go offline. Monitoring critical systems such as pumps, valves, and thermostats is essential, since any system error can translate into huge financial loss and pose catastrophic risk to the life and well-being of workers and communities.



Why Isn't Your Current Approach to Scaling Agile Working?


When looking to scale organizational agility, the people in your organization need to own their new way of working. For that to happen, they will have to create their own process, one that works in their specific context. When people create their process, they will learn what works for them, and a new culture, 'the way we do things here,' will emerge. Implementing someone else's model is like providing an answer before knowing the question, which is unlikely to succeed. Instead, consider starting with the simplest process that works, then building upon it using Empirical Process Control and a framework that makes transparent to all what to improve; that framework is called Scrum. There is a story that in 2001 Toyota wanted to publish a book called “The Toyota Way”. Upon hearing of this, their CEO said he opposed the title, suggesting it should be called “The Toyota Way 2001” because next year their way of working would have changed.


Six Recommendations for Aspiring Data Scientists


One of the skills that I like to see data scientists demonstrate is the ability to make different components or systems work together in order to accomplish a task. In a data science role, there may not be a clear path to productizing a model, and you may need to build something unique in order to get a system up and running. Ideally a data science team will have engineering support for getting systems up and running, but prototyping is a great skill for a data scientist who needs to move quickly. My recommendation here is to try to get different systems or components to integrate within a data science workflow. This can involve getting hands-on with tools such as Airflow in order to prototype a data pipeline. It can involve creating a bridge between different systems, such as the JNI-BWAPI project I started to interface the StarCraft Brood War API library with Java. Or it can involve gluing different components together within a platform, such as using GCP DataFlow to pull data from BigQuery, apply a predictive model, and store the results to Cloud Datastore.
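The Airflow suggestion lends itself to a small example. The following is a hedged sketch of a daily scoring pipeline in the spirit of the flow mentioned above (pull data, apply a model, store results); the DAG name and task bodies are placeholders, not the author's actual project code.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull the day's rows (e.g. from a warehouse) and stage them for scoring."""
    ...

def score():
    """Apply a predictive model to the staged rows."""
    ...

def load():
    """Write the scored results to a datastore."""
    ...

with DAG(
    dag_id="scoring_pipeline",            # hypothetical name
    start_date=datetime(2019, 3, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    score_task = PythonOperator(task_id="score", python_callable=score)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> score_task >> load_task
```

Even a stub like this exercises the integration skill being described: scheduling, dependencies between systems, and a place to hang retries and alerts.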


Three Questions to Gauge Emotional Intelligence

For work teams to succeed, your employees need to trust one another. Research has found that high-trust environments promote higher worker engagement, and that when trust is compromised, people “become withdrawn and disengaged.” ... Building trust requires multiple emotional intelligence competencies: understanding what the other person is expressing, sensing what they’re feeling, being conscious of your own behavior, and altering your behavior with each individual. I’ve found this interview question is a great opportunity to probe how much thought a candidate gives to all these elements. ... Increasingly, employees and customers are flocking to companies that have a social purpose, a desire to do something good for the world, in addition to their profit motives. EY reports that such companies have been shown to far outperform the S&P average. If your company has a purpose, a candidate who has prepared for the interview will likely know it. But asking them to recite a line they read somewhere on your corporate website won’t tell you much.


Improve help desk management for smooth IT operations


A regular time sink in IT management is duplicate work in the help desk from a lack of communication among systems administrators, developers or other support staff. Recurrent problems are fixed superficially and are liable to arise again in a future ticket. Each fix increases the burden of platform maintenance, as help desk agents apply change after change. While specific log restraints streamline issue management, industry analyst Clive Longbottom presented another option for help desk management improvement: Adopt a natural language processing and knowledge management system. NLP augments help desk management with a system that analyzes the language in tickets, compares it to previous entries and helps identify patterns. Knowledge management also helps discover relationships between current and past issues and alerts IT staff to those connections to provide greater context for resolution. Legacy IT service management systems are reactive and require a person or machine to open the ticket before it can be resolved. Through the implementation of AI, IT teams turn the help desk into a proactive system -- and reduce their workloads.
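One way to picture the NLP piece is a similarity search over past tickets. The sketch below assumes scikit-learn is available and uses invented ticket text; it vectorises tickets with TF-IDF and ranks previous entries by cosine similarity so an agent can see whether a recurrent problem is resurfacing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented history of resolved tickets (placeholders, not real data).
past_tickets = [
    "VPN drops every morning after password rotation",
    "Printer queue stuck on floor 3, jobs not releasing",
    "Outlook repeatedly prompts for credentials after MFA change",
]

new_ticket = "Outlook keeps asking users for their password since the MFA rollout"

# Fit the TF-IDF space on past tickets and project the new one into it.
vectorizer = TfidfVectorizer(stop_words="english")
past_matrix = vectorizer.fit_transform(past_tickets)
new_vector = vectorizer.transform([new_ticket])

# Rank past tickets by similarity; a high score hints at a recurrent issue.
scores = cosine_similarity(new_vector, past_matrix).ravel()
for score, text in sorted(zip(scores, past_tickets), reverse=True):
    print(f"{score:.2f}  {text}")
```

A production knowledge management system would go further (clustering, a knowledge base, alerting), but the core pattern-matching step is no more exotic than this.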


Defining a Distinguished Engineer

A technical leader should build up others and empower their colleagues to do things that are more challenging than what they might think they are capable of. This is key for growing other members of an organization. I personally believe you don’t need a high title to take on a hard task; you just need the support and faith that you are capable of handling it. That support should come from the distinguished engineer and be reflected in their behavior towards others. A technical leader should also make time for growing and mentoring others. They should be approachable and communicate with their peers and colleagues in a way that invites conversation. They should welcome newcomers to the team and treat them as peers from day one. A distinguished engineer should never tear others down, but they should be capable of giving constructive criticism on technical work. This does not mean finding something wrong just to prove their brilliance; no, that would make them the brilliant jerk.


Why AI will make healthcare personal

A control monitor during a heart catheterization operation.
AI is already contributing to reducing deaths due to medical errors. After heart disease and cancer, medical errors are the third-leading cause of death. Take prescription drug errors. In the US, around 7,000 people die each year from being given the wrong drug, or the wrong dosage of the correct drug. To help solve the problem, Bainbridge Health has designed a system that uses AI to take the possibility of human error out of the process, ensuring that hospital patients get the right drug at the right dosage. The system tracks the entire process, step-by-step, from the prescription being written to the correct dosage being given to the patient. Health insurance company Humana is using AI to augment its human customer service. The system can send customer service agents real-time messages about how to improve their interaction with callers. It’s also able to identify those conversations that seem likely to escalate and alert a supervisor so that they’re ready to take the call, if necessary. This means the caller isn’t put on hold, improving the customer experience and helping to resolve issues faster.
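The article does not describe Bainbridge Health's implementation, but the kind of check involved is easy to picture. The sketch below uses invented drug names and dose limits purely for illustration; it flags an order that falls outside a reference range before it reaches the patient.

```python
# Invented reference ranges in mg, for illustration only; not clinical data.
DOSE_RANGES_MG = {
    "drug_a": (50.0, 200.0),
    "drug_b": (5.0, 20.0),
}

def check_order(drug: str, dose_mg: float) -> str:
    """Flag orders for unknown drugs or doses outside the reference range."""
    if drug not in DOSE_RANGES_MG:
        return "flag: drug not in formulary"
    low, high = DOSE_RANGES_MG[drug]
    if not low <= dose_mg <= high:
        return f"flag: {dose_mg} mg outside {low}-{high} mg range"
    return "ok"

print(check_order("drug_b", 50.0))   # flag: 50.0 mg outside 5.0-20.0 mg range
```

The value of the system described lies in tracking every step from prescription to administration rather than a single range check, but the principle of removing a human transcription step is the same.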


Agile in Higher Education: Experiences from The Open University

Thinking about the enterprise agility theme, as described in great recent books by Sriram Narayan (Agile IT Organization Design) and Sunil Mundra (Enterprise Agility), I am afraid to say that universities in the UK are going in the opposite direction, consolidating their academic schools and departments into bigger and bigger mega faculties, and everyone else into 'professional-services' mega units. The result is lots of large, functional, activity-oriented teams in silos, with huge costs of communication and collaboration, slow decision making, and low levels of customer focus and staff empowerment. But universities are starting to wake up to the potential of agile, and some are using agility to transform their strategy and delivery at the organisational level; National University of Singapore is a great example of this for the UK higher education sector. The Open University is the largest university in the UK, with 200,000 students. Each year we produce nearly 200 new online courses and update 300 more.


AI: A new route for cyber-attacks or a way to prevent them?

If deployed correctly, AI can collect intelligence about new threats, attempted attacks and successful breaches – and learn from it all, says Dan Panesar, VP EMEA, Certes Networks. “AI technology has the ability to pick up abnormalities within an organisation’s network and flag them more quickly than a member of the cyber security or IT team could,” he says. Indeed, current iterations of machine learning have proven to be more effective at finding correlations in large data sets than human analysts, says Sam Curry, chief security officer at Cybereason. “This gives companies an improved ability to block malicious behaviour and reduce the dwell time of active intrusions.” It is true that AI increases efficiency, but the technology isn’t intended to completely replace human security analysts. “It’s not to say we are replacing people – we are augmenting them,” says Neill Hart, head of productivity and programs at CSI. However, AI and machine learning have a dark side: the technology is also being harnessed by criminals. It would be short-sighted to think that the technological advancements offered by AI will provide a complete barrier against the fallout of cyber-attacks, says Helen Davenport, director, Gowling WLG.
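As a toy illustration of "flagging abnormalities", and not a description of any vendor's product, the sketch below trains an unsupervised IsolationForest on a few invented network-flow features and asks it to score a suspicious connection.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented per-connection features: bytes sent, bytes received, duration (s).
normal_traffic = rng.normal(loc=[2000, 5000, 30], scale=[300, 800, 5], size=(500, 3))
suspect = np.array([[90000, 200, 2]])   # huge upload, tiny response, very short

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# predict() returns -1 for points the model considers anomalous.
print(model.predict(suspect))           # likely [-1]: flag for a human analyst
```

The final line echoes the point made above: the model surfaces the correlation faster than a person could, but a human analyst still decides what it means.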


How Do You Know When A Cybersecurity Data Breach Is Over?

The answer is often a surprise. It isn’t over when you’ve removed a hacker or insider threat from your network environment, just as it doesn’t begin with the discovery of patient zero of a cyber attack. It ends when your organizational attitudes toward cybersecurity revert to what they were before the breach. The question is: "Is the return to 'business as usual' a good thing?" Usually not, especially when you think about how the breach began. Most organizations I've worked with assume a data breach begins when a hacker penetrates your network. But it actually starts long before — with the sum of bad security habits, mismanaged mergers and acquisitions, budget decisions that scrimp on security and bad choices like relying on outdated equipment or not deploying security patches. In this way, a breach can be a good thing because it wakes everyone up — it serves as the greatest security awareness exercise possible. When a breach occurs, everyone is interested in information security for a brief duration — from the incident response and mitigation teams to public relations.



Quote for the day:


"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome." - John Donahoe