Daily Tech Digest - March 25, 2019

Why Big Banks Are Losing To Tech Giants Over Open Banking

42% disagree that collaboration with fintechs is needed for retail banks to innovate faster. Michal Kissos Hertzog, CEO of Pepper, said of the research: "It highlights the size of the disconnect between traditional banks and their customers. Banks are not innovating fast enough, and the value proposition and consumer experience are nowhere near where they should be. It's not for lack of trying, but the reality is that banks are failing to go fully digital and are falling further behind. However, it's not all bad news - banks still retain consumer trust, which is a position of tremendous strength, and decision-makers understand how they need to improve. Only time will tell if they are able to deliver." For banks in the U.K., research shows that decision-makers believe traditional retail banks are struggling to compete in the digital era. The vast majority (82%) say banks aren't innovating fast enough to meet changing consumer demands for digital services, with almost half (48%) thinking that these banks are at least three years behind fintech rivals.



Finding real strength in numbers through data partnerships

Over the last two years, we've seen some form of the following paragraph on a presentation slide at almost every data-focused conference we've attended. The quote has been stolen, and re-stolen, from a TechCrunch article by Tom Goodwin, in which he said: "Uber, the world's largest taxi company, owns no vehicles. Facebook, the world's most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world's largest accommodation provider, owns no real estate. Something interesting is happening." Each of those companies has created massive value by crafting data partnership approaches and then delivering value greater than any one dataset could provide on its own. This is data innovation, combined with masterfully executed consumer marketing and user experience. Each one embraced the Amazon vision of open data structures, internally and externally, to power their go-to-market value proposition. In other words, when it comes to data partnerships, the whole is often greater than the sum of its parts.


The Benefits Of Edge Computing In IoT


Edge computing in IoT implies having autonomous systems of devices at network endpoints (the edge) that simultaneously gather information and respond to it without having to communicate with a remote data center. Instead of relying on remote data centers and computational servers, data can be processed right where it is collected, eliminating the need for constant connectivity to centralized control systems and the problems inherently associated with such setups. For instance, a software company that sells cloud-based mobile applications can have cloud servers based in multiple locations closer to users instead of in a single location that may lead to undesirable latency and a single point of failure in case of any mishap. If the centralized servers failed for any reason, all application users would lose their data and access to services at once. Additionally, the servers would also have to deal with heavy traffic, causing latency and inefficiency. On the contrary, a decentralized system would ensure that all the data pertinent to specific users is hosted in the data center closest to them, minimizing latency and limiting the impact of any potential failure.
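
To make the routing idea concrete, here is a minimal Python sketch of nearest-data-center selection; the region names and latency figures are invented for illustration, and a real system would measure latency by probing each endpoint:

```python
# Hypothetical data centers with illustrative round-trip latencies (ms);
# in practice these values would come from probing each endpoint.
DATA_CENTERS = {"eu-west": 12, "us-east": 85, "ap-south": 140}

def nearest_data_center(latencies):
    """Pick the data center with the lowest observed latency."""
    return min(latencies, key=latencies.get)

def route_request(payload):
    target = nearest_data_center(DATA_CENTERS)
    # A real router would fall back to the next-closest site if this
    # one were down, limiting the blast radius of a single failure.
    return "sending %r to %s" % (payload, target)

print(route_request("sensor reading"))  # sending 'sensor reading' to eu-west
```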


Cohesity plans to put backup data to good use

As with Isilon's OneFS file system, Cohesity's SpanFS distributes storage across several nodes, ensures redundancy of data, indexes data by means of metadata and shares it across the network, NAS-style. SpanFS is not limited to physical nodes and can integrate its capacity with the cloud. It has replication functionality that allows it to continue activity from a remote site or the cloud in case of an incident. In addition to NFS and SMB access, it can share data via the S3 object storage protocol, which is widely used for cloud applications. SpanFS is part of Cohesity's DataPlatform, which also provides access to admin functionality, including configuration, deduplication, replication and monitoring. Also among these is SnapTree, which allows cloned content to be used, for example, to run project tests with real data. DataPlatform software can come on hardware from HPE, Dell or Cisco as appliances, or in virtual appliance format. As an option, the Helios SaaS console allows the centralisation of administration for multiple DataPlatform clusters across a number of cloud sites.
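
SpanFS itself is proprietary, but any S3-compatible endpoint can be reached with a standard client by overriding the endpoint URL. A hedged sketch using boto3; the endpoint, bucket and credentials below are hypothetical:

```python
import boto3

# Hypothetical endpoint and credentials; a cluster exposing the S3
# protocol publishes its own endpoint URL and access keys.
s3 = boto3.client(
    "s3",
    endpoint_url="https://cohesity-cluster.example.com:3000",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# List objects in a backup view exposed as a bucket, then fetch one.
for obj in s3.list_objects_v2(Bucket="backup-view").get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.download_file("backup-view", "reports/q1.csv", "/tmp/q1.csv")
```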


Containers, cloud-centric app services boost support for DevOps world


The underlying proxy technology also provides transparent routing to multiple back-end components, Transport Layer Security (TLS) termination, and cross-cutting concerns (e.g. logging, security and data transfer) at the edge of systems. This is particularly valuable within an API gateway - the entry point into microservices-based applications from external API clients. Further, F5 is introducing a new cloud-native application services platform, specifically designed for the apps your DevOps and AppDev teams care about most. One significant innovation is its Service Mesh incubation, Aspen Mesh. "While container orchestration tools like Kubernetes have solved microservice build and deploy issues, many runtime challenges remain unsolved," said Kara Sprague, senior vice president and general manager of the Application Services Business Unit at F5. "Our fully supported service mesh makes it easy to manage the complexity of microservice architecture."
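
As a rough illustration of what happens at that entry point, here is a Python sketch of path-prefix routing; the service names and prefixes are invented, and a production gateway (NGINX, Envoy, and similar) would also terminate TLS and apply logging and security checks at this same point:

```python
# Invented routes mapping URL prefixes to internal back-end services.
ROUTES = {
    "/orders/": "http://orders-svc:8080",
    "/users/": "http://users-svc:8080",
}

def resolve_backend(path):
    """Return the back-end service that should handle this request path."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend
    raise LookupError("no route for " + path)

assert resolve_backend("/orders/42") == "http://orders-svc:8080"
```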


Are You Setting IT Up To Stifle Your Innovation?

The fact is that manufacturing organizations are a bit late to enterprise self-service analytics, or should I say self-service data management, compared to more centrally managed or highly regulated organizations like financial services or healthcare companies. Such organizations have already been dabbling in big data, cloud, and machine learning with varying degrees of success for a decade. Many deployed self-service analytics environments years ago. Nowadays, they are experiencing the "trough of disillusionment," setting them up to finally realize the fruits of artificial intelligence (AI) adoption. They've learned that going back to basics around data quality, governance, cataloging, and cloud-based data integration to facilitate "data democratization" is needed to take full advantage of more advanced technologies. Manufacturers can avoid the mistakes and costly lessons of other industries by doing it right the first time. However, their traditional plant-centric approach and tactile-oriented innovation viewpoint permeate – and potentially limit – IT-related innovation.


IT needs to make mobile unified communications a priority

The need for safe, reliable, and easy-to-use communications tools has given rise to unified communications (UC), a strategy that integrates multiple communications modalities under a single management and security umbrella. The result is more effective communication, improved collaboration, and a boost to security and regulatory compliance. Now that mobility is the primary networking vehicle for end users, it's time for IT departments to make mobile unified communications (MUC) a priority. The most important benefit of MUC is the ability of organizations to finally leave behind the uncontrolled, untracked mish-mash of consumer-centric, carrier, and third-party communications tools that has accumulated over the years. Communications are a critical organizational resource; MUC is a much easier vehicle to manage and scale, and it offers the visibility and control that's essential to enterprise IT deployments. These advantages will enable MUC to become the dominant provisioning strategy and mechanism for organizational communications over the next five to 10 years.


Ransomware, Cryptojacking, and Fileless Malware: Which is Most Threatening?

The drama of the subtitle actually understates the danger of fileless malware. Of ransomware, cryptojacking, and fileless malware, fileless malware is both the youngest and perhaps the most dangerous. Fileless malware, as the name suggests, doesn't behave like traditional malware. Malware usually downloads a file onto the victim device or enterprise environment, which allows legacy antivirus solutions to locate and remove it. Fileless malware doesn't do this. Instead, it injects code into a native process on the endpoint, such as Java or PowerShell. The fileless malware then forces the native program to run its code, performing the malicious task concealed behind normal processes. Legacy endpoint security systems, which depend on traditional threat signatures, can't possibly detect these attacks. Often, fileless malware leaves no trace of itself behind. Hackers increasingly adopt fileless malware attacks because, especially against legacy solutions, they prove largely successful.
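
Since signature scanning misses these attacks, detection has to look at behaviour instead. Below is a heavily simplified sketch of one such heuristic, flagging PowerShell invocations whose command lines suggest in-memory execution; real endpoint products combine many behavioural signals like this one:

```python
import re

# Patterns often seen in fileless attacks: an encoded payload passed on
# the command line, profile loading disabled, or a fetch-and-run idiom.
SUSPICIOUS = [
    re.compile(r"-encodedcommand", re.I),
    re.compile(r"-nop\b", re.I),          # -NoProfile
    re.compile(r"downloadstring", re.I),  # fetch-and-run download cradle
]

def looks_fileless(cmdline):
    """Flag a process command line that resembles in-memory PowerShell abuse."""
    low = cmdline.lower()
    return "powershell" in low and any(p.search(cmdline) for p in SUSPICIOUS)

print(looks_fileless("powershell.exe -NoP -EncodedCommand SQBFAFgA..."))  # True
```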


How to make sure your artificial intelligence project is heading the right way


"The research highlights how everyone involved in the use of AI and big data must have wider discussions about the outcome you're looking for, such as better health, and then work backwards to issues like data sharing and information security. You should always start with the outcome," he says. Baker suggests business leaders looking to ensure they focus on the right objectives for AI and data should consider establishing a public ethics board. Just like companies have executive boards to make decisions, these ethics panels can help organisations that are using emerging technology to make publicly minded decisions. "We know some tech companies, like Deep Mind, already do this," says Baker. "Don't assume that you know what the public wants or that the market research you conduct into public opinions is correct. You need to actually have an ethics panel, and discuss what the issues are and what the needs of the public really are."


Small businesses hit hardest by cyber crime costs


The average cost of cyber attacks to small businesses was £65,000, in damaged assets, financial penalties and business downtime. That puts the total cost of cyber crime across all UK small businesses in 2018 at an estimated £13.6bn. This represents 80% of the financial impact of cyber attacks on all UK businesses in the past year, with a third reporting that they were hit by cyber crime. The survey, conducted by research consultancy Opinium, found that while phishing emails claimed the greatest number of victims (25%), ransomware attacks were the most financially damaging, costing victims £21,000 each on average. Although the trend for large businesses to fall victim at the highest rate continued, with seven in every 10 companies of more than 250 people being hit, the rate at which small companies succumbed to cyber criminals reached its highest level since Beaming started surveying business leaders in 2016. Nearly two-thirds (63%) of small businesses reported being a victim of cyber crime in 2018, up from 47% in 2017 and 55% in 2016.
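
As a quick sanity check on those figures, the quoted average and total imply roughly how many small businesses bore that average cost:

```python
avg_cost = 65_000      # average cost per affected small business (GBP)
total_cost = 13.6e9    # estimated total across UK small businesses (GBP)

# The two headline figures together imply roughly this many businesses
# bearing the average cost.
print(round(total_cost / avg_cost))  # ~209,231
```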



Quote for the day:


"Strategy is not really a solo sport even if you're the CEO." -- Max McKeown


Daily Tech Digest - March 24, 2019

Service Brokering & Enterprise Standard - Build Your Competitive Advantage In The Digital World

Implementing service brokering within an organization requires a fundamental change in culture as the focus needs to evolve from function/technology to service and service delivery. Rather than silos focused around technologies, the organization should rally around teamwork to deliver each service in an optimal way as the broker is central in the integration process between provider and consumer. This is the most difficult aspect when implementing brokering. Changing the way people work, evolving their behaviors to be more user focused takes time. Unfortunately, IT departments have no choice, either they are able to deliver the services required by the users through the supply chain they have developed or they will focus on managing the legacy environments, which may not be seen as a very exciting job. Multiple service use cases are documented in the guide. For each of them the roles and responsibilities of each of the players differ, but efficient service delivery can only be assured if the providers work smoothly and transparently together. 


Technical Debt and Scrum: Who Is Responsible?

Technical Debt & Scrum: Who Is Responsible?
The issue is that technical debt is not created only by the typical hack – the technical shortcut that is beneficial today but expensive tomorrow. (A not uncommon tactic in feature factories.) There is also a kind of technical debt that is passively created when the Scrum Team learns more about the problem it is trying to solve. Today, the Development Team might prefer a different solution to the one the team implemented just six months ago. Or perhaps the Development Team upgrades the definition of "Done," thus introducing rework in former product Increments. No matter from what angle you look at the problem, you cannot escape it, and Scrum does not offer a silver bullet either. ... the Scrum Guide is deliberately vague on the question of who is responsible for technical debt, to foster collaboration and self-organization, starting with the Scrum values – courage and openness come to mind – leading straight to transparency and Scrum's inherent system of checks and balances.


Cybersecurity is an attractive career for ambitious people and a great way to make the world a better place. If you want a career in cybersecurity, don't wait. You don't need to be of a particular age or gender. You don't need any particular approval or certification or study place to get going. Just start learning and start doing. Get involved any way you can. Bug bounties are a great way to learn and test your skills. Check out Hacker101. Just know that even if you can jump straight in, you will need skill, tenacity and patience to ultimately reach a rewarding level of proficiency. Bug hunters may need a year or two of learning before they start finding security vulnerabilities worth reporting. Most bug hunters study the Hacktivity feed, where vulnerability reports are published once the vulnerability has been fixed. Also note that to go far and to become a technical expert in cybersecurity, a lot of studying will be needed. What you invest in learning will come back as career opportunities. A degree in Computer Science will not hurt.



Three Steps to Regain Control over your IT Landscape

Most IT landscapes of larger companies consist of hundreds of applications that are interconnected via poorly designed interfaces. In most companies, these IT landscapes already carry an enormous technical debt (i.e., an 'unnecessary complexity'). In my experience, a company typically runs between 80% and 90% more IT applications (and therefore also servers, databases, networks and costs) than would be needed if it had implemented the ideal architecture. A tremendous waste of money and resources, and the reason why IT is perceived as slow and as a cost factor rather than as an enabler. From my point of view, there are three major reasons for this disastrous situation ... There is a tendency to blame the IT department for this situation, but that's not true. It's a business problem. Requirements are typically not consolidated well across departments. IT has always just been the contractor that had to implement those one-off requirements under time pressure.


Like Football, Your Cybersecurity Defense Needs a Strong Offense

Today, it's essential not only to build the strongest possible defenses but also to deploy creative strategies to gain information on your attackers and how they are trying to breach your networks and penetrate your systems. This idea that "the best defense is a good offense" is not just a slogan representing the conventional wisdom of the cybersecurity intelligentsia. ... In "The Future of Cybersecurity: The Best Defense Is a Good Offense," the company speaks directly to all organizations when it waves the following red flag: "With the sophisticated techniques threat actors are using to mask their activities, the traditional approach of 'building bigger fences' will no longer suffice. The only way organizations can protect themselves is by unleashing offensive cyber techniques to uncover advanced adversaries on their networks." As an example of what going on the offensive might look like, one strategy the company uses is to configure fake computers in a phony, intentionally vulnerable network that functions as "a virtual mousetrap" to lure cyber adversaries; when the hackers bust in, they reveal valuable information about their identities, tactics and intentions.
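
A "virtual mousetrap" can be as simple as a listener on a port that nothing legitimate should touch, logging whoever connects. A minimal, hedged Python sketch follows; real deception platforms emulate entire fake hosts and services:

```python
import socket
from datetime import datetime, timezone

def honeypot(port=2222):
    """Log every connection attempt to a port no legitimate client uses."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, (addr, src_port) = srv.accept()
        # Every hit here is a likely scan or intrusion attempt worth logging.
        print("%s probe from %s:%d" % (datetime.now(timezone.utc), addr, src_port))
        conn.close()

if __name__ == "__main__":
    honeypot()
```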


Cybersecurity: Don’t let the small stuff cause you big problems

Organisations of all sizes in all sectors need to have a cybersecurity strategy, but for healthcare it's particularly important. Not only do IT networks within hospitals and doctors' surgeries need to be accessible and secure in order to provide patient care, these networks hold medical information – some of the most sensitive data that can be held about people. "What's really important is having control over the data and knowing where it is. It's the same issue that's dealt with in many other industries, but with an extra level of duty of care to the people whose data you've got," said Sian John, chief security advisor for EMEA at Microsoft. "You're talking about privacy: it's one level when you're talking about financial data, it's another level if that's my medical history," she added. What's important for health organisations as a whole is being absolutely sure how data is controlled and how it is accessed – and making that knowledge a priority.


Some Cybersecurity Vendors Are Resorting To Lies & Blackmail


It’s hard for cybersecurity companies to get noticed. Smaller vendors particularly struggle because top corporations already have contracts or strong customer relationships with the biggest companies. This is where the threat of negative media coverage comes in. Exposing a security flaw, no matter how small, can garner big headlines if it’s at a big company. Enough press coverage can spark weeks of outrage and land top leaders in front of Congress. However, breaches that actually cause damage are relatively rare. As a result, vendors often try to make a big deal out of minor breaches that don’t expose important company or customer information. For instance, all four executives said vendors tried to draw their attention to potentially exposed data on Amazon and Microsoft Azure cloud servers. None of this data included any current material information. In one case, a database housed business plans for a 10-year-old project that had already been reported on and was now irrelevant. In another case, the data included information about customers — but only their names and the fact that they had attended a technology conference several years earlier.



When Scrum Is Not The Right Answer

As organizations have bought into adopting an Agile approach to software development, I've noticed that one corporation's identification with terms like Agile or Scrum may differ from another's, almost as if they are deciding how they wish to utilize Agile concepts to best meet the needs of their teams. I am really okay with this approach, as I noted in the article "Agile For the Sake of Being Agile." But what if Agile or Scrum is not the right answer at all? ... While the flow is certainly more Kanban than anything else, the goal is to keep the flow of work moving forward. Tickets pushed back to the to-do column would not need to go back to the original developer, but could be handled by any other developer, since the code has already been merged. An alternate flow could be to swap the REVIEW and TEST columns, delaying the merge until after testing has completed – but that was not suggested initially, in order to keep the flow of work moving as quickly as possible. After all, the key is to meet the aggressive deadline.
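
A small sketch of the board flow just described; the column names follow the article, and the push-back rule sends rework to the shared to-do queue rather than back to the original developer:

```python
FLOW = ["TO-DO", "IN PROGRESS", "REVIEW", "TEST", "DONE"]

class Ticket:
    def __init__(self, title):
        self.title, self.column = title, "TO-DO"

    def advance(self):
        """Move one column to the right until DONE."""
        i = FLOW.index(self.column)
        if i < len(FLOW) - 1:
            self.column = FLOW[i + 1]

    def push_back(self):
        # The code is already merged, so rework returns to the shared
        # TO-DO queue and any developer may pick it up.
        self.column = "TO-DO"

t = Ticket("fix checkout bug")
for _ in range(3):
    t.advance()        # TO-DO -> IN PROGRESS -> REVIEW -> TEST
t.push_back()          # test failed: back to the shared queue
print(t.column)        # TO-DO
```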



Keep in mind, a cloud move is not as simple as downloading new software. It’s an entirely new and different ecosystem, one that involves a list of risks: legal, financial, commercial, and compliance, to name a few. To make such a move without stopping long enough to become informed of the dangers is not a good idea. It’s also not as simple as learning which vulnerabilities and threats are sitting out there at any particular moment in time. Threats evolve over time. Old ones become less effective or fall out of favor with hackers and new ones emerge. ... The problem is that you don’t have direct access to see where your data is stored and verify that deleted data has actually been deleted. To a large extent, you have to take it on faith that your CSP does what it says. Consider the structure of the cloud. There’s a good chance your data is spread over several different devices and in different physical locations for redundancy. Further, the actual deletion process is not the same among providers.


Why We Are Making Things so Complicated

There are many reasons. First, Joseph is dealing with the laws of physics – in a brilliant way, I should add. In the virtual world of software-based solutions, such laws don't apply. Furthermore, I suspect that Joseph had to go to a dozen stores to buy all this apparatus and spent a lot of time finding the right gizmos to fit his process. In software-based solutions, you just click, download it, resize it, or copy and paste it ad infinitum if you wish. It is usually simple, often effortless. It can also go in all directions and increase the overall complexity, but still your IT staff will find a way to make it work. In other words, the drawback of computer-based solutions is that it is easy to "clog your kitchen," as in the video. Second, after Joseph is done with video-making, he cleans the kitchen before the in-laws come for dinner. Your IT-based solutions support your business and they stay there as long as you're operating. As easy as it is to fill the kitchen with software-based components, it is proportionately as difficult to empty the room – unless it was planned for.



Quote for the day:


"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown


Daily Tech Digest - March 23, 2019

Digital Convergence’s Impact on OT Security

A significant component of the challenge is that IT and OT networks are founded on very different, and often highly contradictory, priorities. IT networks generally follow the well-established Confidentiality/Integrity/Availability (CIA) model. The emphasis is on ensuring the confidentiality of critical data, transactions, and applications, maintaining network and data integrity, and only then ensuring the protected availability of networked resources. These priorities tend to be the basic building blocks of any security strategy. Conversely, OT networks depend upon and operate with an exactly inverted model. The safety and availability of resources is the topmost priority. Assembly lines, furnaces, generators, and other large systems simply should never go offline. Monitoring critical systems, such as pumps, valves, and thermostats, is essential since any system errors can translate into huge financial loss, and pose catastrophic risk to the life and well-being of workers and communities.



Why Isn't Your Current Approach to Scaling Agile Working?


When looking to scale organizational agility, the people in your organization need to own their new way of working. For that to happen, they will have to create their own process that works in their specific context. When people create their process, they will learn what works for them, and then a new culture – 'the way we do things here' – will emerge. To implement someone else's model is like providing an answer before knowing the question, which likely will not be successful. Instead, consider starting with the simplest process that works; then build upon it using Empirical Process Control and a framework that makes transparent to all what to improve; that framework is called Scrum. There is a story that in 2001 Toyota wanted to publish a book called "The Toyota Way". Upon hearing of this, their CEO said he opposed the title, suggesting it should be called "The Toyota Way 2001", because next year their way of working would have changed.


Six Recommendations for Aspiring Data Scientists


One of the skills that I like to see data scientists demonstrate is the ability to make different components or systems work together in order to accomplish a task. In a data science role, there may not be a clear path to productizing a model, and you may need to build something unique in order to get a system up and running. Ideally a data science team will have engineering support for getting systems up and running, but prototyping is a great skill for data scientists who need to move quickly. My recommendation here is to try to get different systems or components to integrate within a data science workflow. This can involve getting hands-on with tools such as Airflow in order to prototype a data pipeline. It can involve creating a bridge between different systems, such as the JNI-BWAPI project I started to interface the StarCraft Brood War API library with Java. Or it can involve gluing different components together within a platform, such as using GCP DataFlow to pull data from BigQuery, apply a predictive model, and store the results to Cloud Datastore.
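
As one example of that kind of glue, here is a hedged sketch of an Airflow DAG chaining extract, predict and store steps (using the Airflow 2 import path); the three task functions are hypothetical stand-ins for pulling data, applying a model and writing results:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical stages; in the GCP example these would read from BigQuery,
# score rows with the model, and write predictions to Cloud Datastore.
def extract(): ...
def predict(): ...
def store(): ...

with DAG("prototype_scoring", start_date=datetime(2019, 3, 1),
         schedule_interval="@daily", catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="predict", python_callable=predict)
    t3 = PythonOperator(task_id="store", python_callable=store)
    t1 >> t2 >> t3  # run the stages in order, once per day
```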


Three Questions to Gauge Emotional Intelligence

For work teams to succeed, your employees need to trust one another. Research has found that high-trust environments promote higher worker engagement, and that, on the opposite end, when trust is compromised, people "become withdrawn and disengaged." ... Building trust requires multiple emotional intelligence competencies: understanding what the other person is expressing, sensing what they're feeling, being conscious of your own behavior, and adapting your behavior to each individual. I've found this interview question is a great opportunity to probe how much thought a candidate gives to all these elements. ... Increasingly, employees and customers are flocking to companies that have a social purpose – a desire to do something good for the world – in addition to their profit motives. EY reports that such companies have been shown to far outperform the S&P average. If your company has a purpose, a candidate who has prepared for the interview will likely know it. But asking them to recite a line they read somewhere on your corporate website won't tell you much.


Improve help desk management for smooth IT operations


A regular time sink in IT management is duplicate work in the help desk from a lack of communication among systems administrators, developers or other support staff. Recurrent problems are fixed superficially and are liable to arise again in a future ticket. Each fix increases the burden of platform maintenance, as help desk agents apply change after change. While specific log restraints streamline issue management, industry analyst Clive Longbottom presented another option for help desk management improvement: Adopt a natural language processing and knowledge management system. NLP augments help desk management with a system that analyzes the language in tickets, compares it to previous entries and helps identify patterns. Knowledge management also helps discover relationships between current and past issues and alerts IT staff to those connections to provide greater context for resolution. Legacy IT service management systems are reactive and require a person or machine to open the ticket before it can be resolved. Through the implementation of AI, IT teams turn the help desk into a proactive system -- and reduce their workloads.
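
A minimal sketch of that pattern-matching idea using TF-IDF and cosine similarity; the ticket texts are invented, and a production system would layer entity extraction and a knowledge base on top:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented ticket history plus one new incoming ticket.
past_tickets = [
    "VPN drops every hour for remote users",
    "Printer on floor 3 jams on duplex jobs",
    "Outlook crashes when opening shared calendar",
]
new_ticket = "remote user VPN connection keeps dropping"

# Vectorise all tickets together, then compare the new one against
# history to surface likely recurrences of a known problem.
matrix = TfidfVectorizer(stop_words="english").fit_transform(past_tickets + [new_ticket])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

best = scores.argmax()
print("closest past ticket: %r (score %.2f)" % (past_tickets[best], scores[best]))
```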


Defining a Distinguished Engineer

A technical leader should build up others and empower their colleagues to do things that are more challenging than what they might think they are capable of. This is key for growing other members of an organization. I personally believe you don't need a high title to take on a hard task; you just need the support and faith that you are capable of handling it. That support should come from the distinguished engineer and be reflected in their behavior towards others. A technical leader should also make time for growing and mentoring others. They should be approachable, and communicate with their peers and colleagues in an open, welcoming way. They should welcome newcomers to the team and treat them as peers from day one. A distinguished engineer should never tear others down, but they should be capable of giving constructive criticism on technical work. This does not mean finding something wrong just to prove their brilliance; no, that would make them the brilliant jerk.


Why AI will make healthcare personal

AI is already contributing to reducing deaths due to medical errors. After heart disease and cancer, medical errors are the third-leading cause of death. Take prescription drug errors. In the US, around 7,000 people die each year from being given the wrong drug, or the wrong dosage of the correct drug. To help solve the problem, Bainbridge Health has designed a system that uses AI to take the possibility of human error out of the process, ensuring that hospital patients get the right drug at the right dosage. The system tracks the entire process, step-by-step, from the prescription being written to the correct dosage being given to the patient. Health insurance company Humana is using AI to augment its human customer service. The system can send customer service agents real-time messages about how to improve their interaction with callers. It’s also able to identify those conversations that seem likely to escalate and alert a supervisor so that they’re ready to take the call, if necessary. This means the caller isn’t put on hold, improving the customer experience and helping to resolve issues faster.
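
Bainbridge Health's actual system isn't public, but the core guardrail can be pictured as a check of each prescription against a formulary before administration. A purely illustrative sketch with invented drugs and dose ranges:

```python
# Invented formulary: drug -> (min, max) dose in mg per administration.
FORMULARY = {"amoxicillin": (250, 1000), "metformin": (500, 1000)}

def check_dose(drug, dose_mg):
    """Return an alert if a prescribed dose falls outside the allowed range."""
    if drug not in FORMULARY:
        return "ALERT: %s not in formulary" % drug
    lo, hi = FORMULARY[drug]
    if not lo <= dose_mg <= hi:
        return "ALERT: %s mg of %s outside %s-%s mg range" % (dose_mg, drug, lo, hi)
    return "ok"

print(check_dose("metformin", 2000))  # flags a dose outside 500-1000 mg
```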


Agile in Higher Education: Experiences from The Open University

Thinking about the enterprise agility theme, as described in great recent books by Sriram Narayan (Agile IT Organization Design) and Sunil Mundra (Enterprise Agility), I am afraid to say that universities in the UK are going in the opposite direction, consolidating their academic schools and departments into bigger and bigger mega-faculties, and everyone else into 'professional services' mega-units, so you see lots of large, functional, activity-oriented teams in silos, with huge costs of communication and collaboration, slow decision making, and low levels of customer focus and staff empowerment. But universities are starting to wake up to the potential of agile, and some are using agility to transform their strategy and delivery at the organisational level. National University of Singapore is a great example of this for the UK higher education sector. The Open University is the largest university in the UK, with 200,000 students. Each year we produce nearly 200 new online courses, and update 300 more.


AI: A new route for cyber-attacks or a way to prevent them?

If deployed correctly, AI can collect intelligence about new threats, attempted attacks and successful breaches – and learn from it all, says Dan Panesar, VP EMEA, Certes Networks. “AI technology has the ability to pick up abnormalities within an organisation’s network and flag them more quickly than a member of the cyber security or IT team could,” he says. Indeed, current iterations of machine learning have proven to be more effective at finding correlations in large data sets than human analysts, says Sam Curry, chief security officer at Cybereason. “This gives companies an improved ability to block malicious behaviour and reduce the dwell time of active intrusions.” It is true that AI increases efficiency, but the technology isn’t intended to completely replace human security analysts. “It’s not to say we are replacing people – we are augmenting them,” says Neill Hart, head of productivity and programs at CSI. However, AI and machine learning also have a dark side: the technology is also being harnessed by criminals. It would be short-sighted to think that the technological advancements offered by AI will provide a complete barrier against the fallout of cyber-attacks, says Helen Davenport, director, Gowling WLG.
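
To illustrate the abnormality-flagging idea, here is a small sketch using scikit-learn's IsolationForest on synthetic traffic features (bytes transferred, connection duration, destination port); the data and contamination rate are invented:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "normal" flows: ~500 bytes, ~2 s connections, port 443.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[500, 2.0, 443], scale=[100, 0.5, 1.0], size=(1000, 3))

# Learn what baseline traffic looks like, then score new flows.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A large burst to an unusual port should stand out from the baseline.
suspicious = np.array([[50_000, 0.1, 4444]])
print(model.predict(suspicious))  # [-1] means flagged as an anomaly
```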


How Do You Know When A Cybersecurity Data Breach Is Over?

The answer is often a surprise. It isn’t over when you’ve removed a hacker or insider threat from your network environment, just as it doesn’t begin with the discovery of patient zero of a cyber attack. It ends when your organizational attitudes toward cybersecurity revert to what they were before the breach. The question is: "Is the return to 'business as usual' a good thing?" Usually not, especially when you think about how the breach began. Most organizations I've worked with assume a data breach begins when a hacker penetrates your network. But it actually starts long before — with the sum of bad security habits, mismanaged mergers and acquisitions, budget decisions that scrimp on security and bad choices like relying on outdated equipment or not deploying security patches. In this way, a breach can be a good thing because it wakes everyone up — it serves as the greatest security awareness exercise possible. When a breach occurs, everyone is interested in information security for a brief duration — from the incident response and mitigation teams to public relations.



Quote for the day:


"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome." - John Donahoe