Daily Tech Digest - July 19, 2019

How edge computing is driving a new era of CDN

Not long ago, there was a transition from heavy, monolithic physical architecture to the agile cloud. But all that really happened was a move from the physical appliance to a virtual, cloud-based appliance. Maybe now is the time to ask: is this the future we really want? One of the main issues in introducing edge applications is mindset. It is challenging to convince yourself or your peers that the infrastructure you have spent all your time working on and investing in is not the best way forward for your business. Although the cloud has created a big buzz, migrating to the cloud does not mean that your applications will run faster. In fact, all you are really doing is abstracting the physical pieces of the architecture and paying someone else to manage it. The cloud has, however, opened the door for the edge application conversation. We have already taken the first step to the cloud, and now it's time to make the second move. At its simplest, an edge application is a programmable CDN: a CDN is one kind of edge application, and an edge application is a superset of what your CDN is already doing.
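The "programmable CDN" idea can be sketched in a few lines. This is a toy model, not any vendor's actual API: `EdgeCache` and `handle_request` are hypothetical names standing in for an edge cache and an edge request handler, and `origin_fetch` stands in for a call back to the origin server.

```python
import time

class EdgeCache:
    """Tiny in-memory cache with TTL, standing in for a CDN edge cache."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]
            return None
        return value

    def put(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

def handle_request(path, cache, origin_fetch):
    """Edge logic: answer from cache when possible, else go to origin.

    Programmability means custom logic (A/B headers, geo-routing, auth)
    can be added here without touching the origin server at all.
    """
    cached = cache.get(path)
    if cached is not None:
        return {"body": cached, "x-cache": "HIT"}
    body = origin_fetch(path)
    cache.put(path, body, ttl_seconds=60)
    return {"body": body, "x-cache": "MISS"}
```

The second request for the same path never leaves the edge, which is exactly the sense in which a CDN is the degenerate case of an edge application.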



Despite BlueKeep Warnings, Many Organizations Fail to Patch

BlueKeep is a serious vulnerability that could enable attackers to compromise Remote Desktop Services in Windows, which enables access to networked computers via remote desktop protocol. Attackers who successfully exploit the flaw could gain full, remote access to a system, including the ability to create user accounts and give them full administrator privileges, as well as to execute any code. "The vulnerability requires no authentication and is regarded as 'wormable,' meaning that if it were successfully exploited it could be used by self-replicating malware to spread across the internet rapidly," security firm Sophos warns in a new report. "WannaCry and NotPetya used a similarly wormable flaw in Microsoft's SMB v1 to spread around the globe in a matter of hours." One saving grace - so far at least - is that security experts have yet to see any in-the-wild attacks that use BlueKeep. But until companies patch, they remain at risk. "Patching, or rather good cyber hygiene, is an integral component of every company's defense against cyberattacks," Raj Samani, chief scientist at McAfee, tells Information Security Media Group.


Microsoft gets boost in SaaS revenue and pushes Teams platform


The company said its Commercial Cloud business achieved annual revenue of $38bn, and grew by 39% in the quarter with revenue of $11bn, while its Intelligent Cloud business grew by 19% with revenue of $11.4bn. The company also reported growth of 23% in the number of commercial Office 365 seats and strong demand for Windows 10 among commercial PC manufacturers driven by end of support for Windows 7 in January 2020. Satya Nadella, chief executive officer of Microsoft, said: “Every day we work alongside our customers to help them build their own digital capability – innovating with them, creating new businesses with them, and earning their trust. This commitment to our customers’ success is resulting in larger, multi-year commercial cloud agreements and growing momentum across every layer of our technology stack.” During the earnings call, Nadella described Teams as Microsoft’s fastest-growing platform. “There is no question this last fiscal year has been an absolute breakout year for Teams in terms of both product innovation and, most importantly, at-scale deployment and usage,” he said.


Digital technologies and the future of geospatial data


Mapping an area correctly can be a painstaking responsibility, but it's easier with help from drones. They work especially well for geospatial analysis thanks to their maximum operating altitude of 400 feet and imaging technology that captures ground image data in higher resolutions than satellites or planes. The versatility of drones makes them fantastic for a wide range of mapping projects. For example, a retail brand might use a drone to get details about terrain at the potential location of a new retail store. Construction companies can do something similar by factoring drone mapping data into their plans as new buildings or renovations get underway. One of the main reasons drones are such a hot topic now is that people associate them with the rapid delivery of things they order from e-commerce stores. Although drones do make things more convenient that way, they are also used when companies plan the most efficient distribution routes. Geospatial mapping data offers information to e-commerce enterprises, whether people receive their shipments by drone or through other means.


Does net neutrality still matter in our post-web world?

When the phrase was coined, it was in the context of a debate in the US Congress over the idea of a possible nationwide license for broadband service providers. States and municipalities were responsible for granting such licenses to limited geographies, and Republicans in the House were looking for new sources of revenue. Under the provisions of a never-passed law called the COPE Act, ISPs would be given incentives to purchase nationwide licenses instead of more localized ones. One such incentive was a waiver of enforcement of any laws or regulations restricting ISPs' right to divide their pipelines into "good/better/best" service tiers. There was substantive opposition, but Sen. Ron Wyden (D – Oregon) raised the stakes to a moral issue. At issue, he argued, was the small publisher's and garage-based enterprise's right to conduct business on the same Internet as Google and eBay, as equal players in a digital market. Politically speaking, the concept of net neutrality has been as malleable as sediment from an Oregon mudslide.


Is SQL Beating NoSQL?

What we need is an interface that allows pieces of this stack to communicate with one another. Ideally, something already standardized in the industry. Something that would allow us to swap in/out various layers with minimal friction. That is the power of SQL. Like IP, SQL is a universal interface. But SQL is in fact much more than IP. Because data also gets analyzed by humans. And true to the purpose that SQL’s creators initially assigned to it, SQL is readable. Is SQL perfect? No, but it is the language that most of us in the community know. And while there are already engineers out there working on a more natural language-oriented interface, what will those systems then connect to? SQL. ... SQL is back. Not just because writing glue code to kludge together NoSQL tools is annoying. Not just because retraining workforces to learn a myriad of new languages is hard. Not just because standards can be a good thing. But also because the world is filled with data. It surrounds us, binds us. At first, we relied on our human senses and sensory nervous systems to process it.
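The readability argument is easy to see in a concrete query. As a minimal sketch using Python's built-in SQLite driver, the same declarative statement would run essentially unchanged against any SQL engine (the table and column names here are illustrative):

```python
import sqlite3

# An in-memory database stands in for any SQL engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("temp", 21.5), ("temp", 22.0), ("humidity", 40.0)],
)

# Readable even to non-programmers: average value per sensor.
rows = conn.execute(
    "SELECT sensor, AVG(value) FROM readings GROUP BY sensor ORDER BY sensor"
).fetchall()
print(rows)  # [('humidity', 40.0), ('temp', 21.75)]
```

That `SELECT ... GROUP BY` line is the universal interface in action: the analyst reads intent, and the engine underneath is swappable.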


Container security improves overall enterprise IT posture  


Once apps reach the production Kubernetes environment, security policies enforced through Aqua allow all developers and IT ops pros read-only access to their activities. This improves and speeds up application development, and lets IT pros troubleshoot faster than they could with VMs -- in the past, Recurly's security staff more carefully restricted such access without automated whitelisting tools available for containers. Also, since containers separate application processes from the underlying host, admins can more strictly lock down the host itself with tools such as Google's Container-Optimized OS. "We are heavily running immutable hosts today, so even if you break out of a container and get on a host, good luck," Hosman said. "You can't run anything, install anything, or pivot to anything, and if we restart the host, everything just resets." Recurly's goal is to move away from human responses to alerts, whether they refer to IT monitoring or container security issues, and toward a remediation response to issues through code.
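The "immutable host" idea has a container-level counterpart in Kubernetes. The following pod spec is illustrative only (not Recurly's actual configuration); the `securityContext` fields are standard Kubernetes settings, while the pod and image names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: locked-down-app        # hypothetical name
spec:
  containers:
  - name: app
    image: example.com/app:1.0  # hypothetical image
    securityContext:
      readOnlyRootFilesystem: true    # nothing can be written or installed
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]                 # no extra kernel capabilities to pivot with
```

With a read-only root filesystem and all capabilities dropped, an attacker who lands in the container faces the same "can't run anything, install anything, or pivot to anything" dead end described above.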


Microservices: Myth, Madness, or Magic?

The reality is, you almost never need microservices to achieve the above "holy grail"; you just need a decent architecture. So let's redefine microservices: Microservices: Yet another concept to fix the bad architecture created by bad software developers and to make money for big businesses that feed on the bad software practices of others. An article on Medium writes "Conceptually, Microservices extend the same principles that engineers have employed for decades."2 Wrong. These principles have existed for decades, but "employed?" Hardly ever. Similarly, a post on New Relic states: "When using microservices, you isolate software functionality into multiple independent modules that are individually responsible for performing precisely defined, standalone tasks. These modules communicate with each other through simple, universally accessible application programming interfaces (APIs)."3 Wait, we need microservices to achieve this? Wasn't this the promise of OOP? Isn't this the promise of every newfangled framework like MVVM, Angular, and so forth?


Microsoft to explore using Rust

"A developer's core job is not to worry about security but to do feature work," Thomas said. "Rather than investing in more and more tools and training and vulnerability fixes, what about a development language where they can't introduce memory safety issues into their feature work in the first place? That would help both the feature developers and the security engineers, and the customers." Microsoft looking into Rust as a safer alternative to C++ isn't actually such a big deal. The OS maker has been looking for safer C and C++ alternatives for years. In June 2016, Microsoft open-sourced "Checked C," an extension to the C programming language that brought new features to address a series of security-related issues. Microsoft looking into Rust before any other memory-safe language is also not a bad decision. Besides offering stronger memory protections than C#, Rust is also more popular with developers these days and might be easier to recruit for. ... Developers love it because of its simpler syntax and the fact that apps coded in Rust don't yield the same number of bugs, allowing developers to focus on expanding their apps instead of doing constant maintenance work.


Data governance in the age of AI: Beyond the basics

Ensure governance team members have defined roles, including tactical and high-level strategy responsibilities, Smithson says. Split data champions into two groups: data stewards, who make recommendations about formulas or algorithms, for example, and director- or VP-level data owners who make the decisions, Walton adds. And put roles and responsibilities into job descriptions. “The job responsibilities come from the workflows and the tasks that need to be accomplished.” Those job descriptions should fall into two buckets, he says: data quality assurance and information consistency. For the former, tasks include identifying a data quality issue, remediating that issue with a workflow change, for example, and monitoring to ensure the effectiveness of the data governance initiative. For the latter, tasks include creating a business measure to support key performance indicators, modifying it when business rules change, and sunsetting any items that are no longer relevant. A bonus tip: Tie data owners’ bonuses to data quality. “That will get people’s attention,” Walton says.



Quote for the day:


"Leadership is the art of giving people a platform for spreading ideas that work." -- Seth Godin


Daily Tech Digest - July 18, 2019

CIOs must play a key role in ecosystem strategies

Digital technologies emphasize the need for a more agile IT strategy with technology investments that support the future needs of the business. IT organizations will need to be multi-speed, taking advantage of business opportunities such as boosting customer engagement via new digital channels or winning emerging markets customers—alongside their traditional role as providers of technology capabilities and solutions. Information technology is critical in satisfying the heightened need for data insights, as today’s executives seek accurate, real-time information that supports decision making, reduces risk, and helps drive improvements. Accenture Strategy research, Cornerstone of future growth: Ecosystems, shows that companies in the United States clearly see advantages in ecosystems, and almost half of those surveyed are actively seeking them. Accenture Strategy surveyed 1,252 business leaders from diverse industries across the world, including 649 in the United States, to better understand the degree to which companies are capturing ecosystem opportunities. Survey results indicated executives’ desire to lead through adaptation and adoption. 



Digital transformation in the construction industry: is an AI revolution on the way?
There is an appetite to change, with the construction industry looking at a range of technologies, on top of AI, that could help them in the future, with virtual reality (28%), cloud computing (24%), software defined networking (20%), blockchain (19%) and Internet of Things (17%) all seen as key to future development by those in larger organisations. According to Tech Nation’s 2018 report, technology is expanding 2.6 times faster than the rest of the UK economy, and yet the construction industry has been slow to implement digitalisation strategies that could bring increased efficiency and collaboration as well as reduced costs. The majority of the construction firms surveyed said they have either completed a digital transformation project or have one currently underway — over half (61%) noted improved efficiency and reduced operational costs (58%) as direct advantages.


Maintaining Security As You Invest in UCC

Today’s workforce has unprecedented expectations for usability. Your users expect all their software to be simple to use and just work, no exceptions. The same goes for your customers – they don’t have the patience for a collaboration tool that isn’t immediately connected or intuitive. Don’t allow this expectation of ease of use to push you into security shortcuts. Users and administrators must be sensitive to the default settings of the web applications being used to host their online meetings, and ensure permissions are set with both user experience and security top of mind. Remind users to keep browsers up to date, including the latest security patches. Collaboration tools should never bypass operating system or browser security controls for the sake of simplicity for the end user. The risk is far greater than the reward. Meeting spaces should be safe spaces for open collaboration and discussion. But online meetings have opened up every internal conversation to external hackers in a way that in-office meetings never have.



Is It The Platform Or Is It The Ecosystem?

The key to ecosystems is understanding that they represent a whole new economy. Apple’s App Store succeeded in part because of the extensive advocacy for Apple at the launch of the iPhone. At the time, I worked inside Nokia, and we could barely get airtime for Nokia innovations in the face of all the content that encircled an incumbent in a powerful industry like computing. It was only after a year or so that Apple understood it was creating opportunity for ‘the little guy’. It stumbled upon success with apps, but its ecosystem was there long before Steve Jobs gave it the green light. Ecosystems thrive on information and content. They also thrive when they create multiple avenues for new businesses, as per the Airbnb example above. Like anything, they need strong branding, and that gives incumbents the advantage. They thrive when they breach the walls of an established industry, allowing entrepreneurial passion to pour in. In healthcare, GE tried to establish an ecosystem for breast cancer diagnostics, but in reality it only let in established healthcare firms.


5 Important Ways Jobs Will Change In The 4th Industrial Revolution

Rather than succumb to the doomsday predictions that “robots will take over all the jobs,” a more optimistic outlook is one where humans get the opportunity to do work that demands their creativity, imagination, social and emotional intelligence, and passion. Individuals will need to act and engage in lifelong learning, so they are adaptable when the changes happen. The lifespan for any given skill set is shrinking, so it will be imperative for individuals to continue to invest in acquiring new skills. The shift to lifelong learning needs to happen now because the changes are already happening. In addition, employees will need to shape their own career path. Gone are the days when a career trajectory is outlined at one company with predictable climbs up the corporate ladder. Therefore, employees should pursue a diverse set of work experiences and take the initiative to shape their own career paths. Individuals will need to step into the opportunity that pursuing their passion provides rather than shrink back to what had brought success in the past.


Network capacity planning in the age of unpredictable workloads


To plan realistic capacity requirements, network engineers dive into the complex math of the Erlang B formula; if you are inclined to learn it, check out the older book James Martin's Systems Analysis for Data Transmission. However, there are also easier rules of thumb. As a connection congests, the risk of delay and packet loss increases in a nonlinear fashion. This tenet underpins network capacity planning fundamentals. Problems ramp up slowly until the network reaches about 50% utilization; issues rise rapidly after that threshold. At 70% utilization, delay doubles, for example. Keep connection, or gateway, utilization around the 50% level to avoid congestion during peaks. Unexpected traffic peaks often occur when a single transaction launches a complex multicomponent workflow, and especially when traffic changes because of failover or scaling. The most significant network capacity planning decision is how to size the DCI network. It is the hub of all workflows, into and out of the cloud and to and from workers and internet users. The DCI network must never become congested.
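The Erlang B math is less forbidding than it sounds. The formula gives the probability that an arriving call (or request) is blocked when `servers` circuits are offered `erlangs` of traffic, and the standard iterative recurrence avoids the huge factorials of the closed form:

```python
def erlang_b(erlangs, servers):
    """Blocking probability for `erlangs` of offered traffic on `servers`
    circuits, via the recurrence B(m) = E*B(m-1) / (m + E*B(m-1))."""
    b = 1.0  # blocking probability with zero servers is 1
    for m in range(1, servers + 1):
        b = (erlangs * b) / (m + erlangs * b)
    return b

# Example: 10 erlangs of offered traffic on 15 trunks.
print(round(erlang_b(10, 15), 4))  # → 0.0365, i.e. about 3.6% of calls blocked
```

Running the function across a range of trunk counts is a quick way to see the nonlinear knee the rule of thumb describes: adding a few circuits near the congestion point cuts blocking dramatically.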


The lost art of ethical decision making

Ethics need not be wildly complex, nor must you exemplify saintly behaviors or be infallible in your decision making. As you lead your teams, try to apply these guidelines. Implement the "newspaper test": When faced with a complex decision, especially one in which you're faced with a variety of bad options, imagine that an account of your decision and the behaviors and process that got you there were published in a front-page newspaper story. Would you be a sympathetic character who weighed the various options, treated the parties fairly, and respected your obligations as a leader, even though the outcome wasn't all rainbows and unicorns, or would you be portrayed as slyly manipulating circumstances for your benefit? Perhaps one of the most challenging concepts is that of "fairness," particularly around the human tendency to conflate fairness of a process with fairness of outcome. The former should be the goal of your own ethical standards, as that provides all parties with similar consideration, information, and standards. Trouble arises when you attempt to create a "fair" outcome, which causes you to treat various parties and factors differently to justify an end result.


Protecting the edge of IoT as adoption rates for the technology grow

You can see the problem: with the rapid increase of edge devices, the risk of a data breach only multiplies for enterprises. Last year, for example, there were 1,244 data breaches, exposing 446.5 million records. This not only leads to significant business obstacles; breaches also come at a high price — the Ponemon Institute estimates the average cost of a data breach to exceed $3.5 million. This broader array of environments, coupled with the prevalence of data breaches, makes it critical for enterprises to secure their computing infrastructure. “With the growth of IoT and the rising cost of data breaches, enterprises need a secure computing infrastructure more than ever,” confirms Damon Kachur, vice president, IoT Solutions, Sectigo. To meet this demand, Sectigo — the commercial Certificate Authority (CA) — has entered into a secure edge computing technology pact with NetObjex — an intelligent automation platform for tracking, tracing and monitoring digital assets using AI, blockchain, and IoT.


Companies with zero-trust network security move toward biometric authentication

"Fundamentally we've all figured out that you can't trust everything just because it's on the inside of your firewall; just because it's on your network," says Wendy Nather, director of Advisory CISOs at Duo Security, a multi-factor authentication (MFA) solutions provider that is now part of Cisco Systems. "So, if you agree with that, the question becomes: What are we trusting today that we really shouldn't be trusting, and what should we be verifying even more than we have been? The answer is really that you have to verify users more carefully than you have before, you have to verify their devices, and you need to do it based on the sensitivity of what they're getting access to, and you also need to do it frequently, not just once when you let them inside your firewall." "You should be checking early and often, and if you're checking at every access request, you're more likely to catch things that you didn't know before," Nather says.


Lateral phishing used to attack organisations on global scale


Out of the organisations targeted by lateral phishing, more than 60% had multiple compromised accounts. Some had dozens of compromised accounts that sent lateral phishing attacks to additional employee accounts and users at other organisations. In total, researchers identified 154 hijacked accounts that collectively sent hundreds of lateral phishing emails to more than 100,000 unique recipients. A recent benchmarking report by security awareness training firm KnowBe4 shows that the average phish-prone percentage across all industries and sizes of organisations is 29.6% – up 2.6% since 2018. Large organisations in the hospitality industry have the highest phish-prone percentage (PPP) of 48%, and are therefore most likely to fall victim to a phishing attack, while the transportation industry is at the lowest risk, with large organisations in the sector scoring a PPP of just 16%. Because lateral phishing exploits the implicit trust in the legitimate accounts compromised, these attacks ultimately lead to increasingly large reputational harm for the initial victim organisation, the researchers said.



Quote for the day:


"It is not enough to have the right ingredients, you must bake the cake." -- Tim Fargo


Daily Tech Digest - July 17, 2019

PayPal-backed blockchain aims to help banks verify digital IDs


Security is critical to a project like this. “You could in theory have some centralized database that would store all the personal information for a group of banks,” Commons said. “But it's very difficult to do that while getting the appropriate consent and making sure that the information is really being shared on a truly need-to-know basis.” CV Madhukar, investment partner at Omidyar Network and global leader of the firm's work on digital identity, sees the project as a means of enabling financial inclusion and helping cut the cost of know-your-customer compliance. “One of the biggest use cases for digital identity is in financial inclusion, and one of the biggest challenges for financial inclusion is getting the KYC process right,” he said. “For the most vulnerable populations, getting KYC documentation is such a big challenge. Every time they need documents supporting KYC, they run into trouble. So whatever can ease the burden of the KYC process has value.” “This makes it easier for individuals and companies to get this done quickly and most importantly puts the user data in a safe place to access,” Madhukar said. “This is very central to protecting individuals’ privacy.”


An IoT security maturity model for IT/OT convergence



"On the IT security side, we're used to operating in a certain way," Carielli said. "We're used to saying, 'One Sunday a month for a couple hours we're going to shut down and apply patches.' It's OK from an IT perspective, but those paradigms don't work for OT." Shutting down a plant or factory, even for a couple of hours on a Sunday, could cost millions of dollars in lost production. And shutting down utilities, such as a smart grid, simply isn't feasible. "It's incumbent upon security folks to start to understand how operational technologies need to do business and how those may differ from what they're used to," Carielli said. IT security teams must also accept the fact that OT system lifecycles are much longer than those of IT systems, Carielli noted. This introduces legacy and brownfield equipment and applications IT teams aren't familiar with -- some of which have been in place for decades. Additionally, OT systems generally never connected to the internet in the past as they do today, so security wasn't built in from the start. Many industrial systems must be retrofitted with security controls -- which requires OT teams to adapt.


Security Flaw Exposed Valid Airline Boarding Passes
"It was possible to download valid boarding passes - not belonging to the user - for future flights due to an insecure direct object reference weakness within the application," Stubley tells Information Security Media Group. "Insecure direct object reference or IDOR vulnerabilities occur when an application provides direct access to objects based on user-supplied input, bypassing expected authentication and user access controls." Amadeus develops travel industry software used by 500 airlines - including United Airlines and Air Canada - as well as hotels, rail and cruise lines, tour operators and others. "Amadeus recently became aware of a configuration flaw affecting its Altéa Self Service Check-In solution," a spokeswoman tells ISMG. "Our security teams took immediate action and the vulnerability is now fixed. We are not aware of there having been any further unauthorized access resulting from the vulnerability, beyond the activity of the security researcher. We regret any inconvenience this might cause to our customers."
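An IDOR flaw fits in a few lines of code. The sketch below is illustrative, not Amadeus's actual application: a vulnerable handler returns whatever boarding pass ID the user supplies, while the fixed one also checks that the authenticated user owns the record.

```python
# Hypothetical boarding-pass store keyed by a sequential ID.
BOARDING_PASSES = {
    101: {"owner": "alice", "flight": "BA123"},
    102: {"owner": "bob", "flight": "LH456"},
}

def get_pass_vulnerable(requesting_user, pass_id):
    # IDOR: direct object reference based on user-supplied input.
    # Any authenticated user can fetch any pass, including other people's.
    return BOARDING_PASSES.get(pass_id)

def get_pass_fixed(requesting_user, pass_id):
    # Fixed: authentication alone is not enough; enforce authorization
    # by checking that the record belongs to the requester.
    record = BOARDING_PASSES.get(pass_id)
    if record is None or record["owner"] != requesting_user:
        return None
    return record
```

With the vulnerable version, "alice" can simply increment the ID to download "bob"'s pass, which is exactly the enumeration Stubley describes.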



DevOps, The SDLC, and Agile Development


DevOps has a mutually beneficial relationship with Agile development, offering more flexibility than the rigid structures that preceded it in the IT arena. The whole idea is directed at people working together and accepting change. This is what allows the release of high-quality software at a faster delivery speed. In addition to the coordination, the people-centric culture of DevOps is built on a particular viewpoint. It encourages a culture of open-mindedness, predictability, cross-skill training, and trying to do something extra. Thus, a shared identity develops between different teams. ... Solutions evolve through coordination between various cross-functional teams which use proper practices for their context. The focus thus lies on collaboration and self-organization, as the team together decides the next approach. Agile is hence a mindset made up of the values contained in the Agile Manifesto. The whole idea behind Agile can be deduced from the first sentence of the Agile Manifesto: “We are uncovering better ways of developing software by doing it and helping others do it.”



How a Big Rock Revealed a Tesla XSS Vulnerability


Curry writes in a blog post that he'd been trying to find a flaw within Tesla's web browser, which is a pared-down version of Google's Chromium. Then in April, he experimented with naming his Tesla. Owners can assign their car a nickname, which is displayed in the mobile app. Curry set his car's name to "%x.%x.%x.%x." That's a type of format string attack: a vulnerable application may try to interpret the string, causing unintended consequences. At one time, BMW's 2011 330i was vulnerable to this kind of attack, which could remotely crash the multimedia software due to an issue with its Bluetooth stack, designated CVE-2017-9212. But the naming approach didn't work. So he decided to change the car's name to a cross-site scripting payload that came from XSS Hunter, a tool for finding these types of vulnerabilities. Nothing happened, or at least not right away. Curry says he had a month of free time earlier this year and decided to drive across the U.S. "I went on this super long - probably like 70 hours of driving - road trip," he says.
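Why is a name like "%x.%x.%x.%x" dangerous at all? Format string attacks classically target C's printf family, where stray "%x" specifiers dump memory. A rough Python analogue shows the underlying mistake: letting user input become the format string itself instead of a value. All the names here (`AppConfig`, `Car`, `render_unsafe`) are hypothetical, not Tesla's code.

```python
class AppConfig:
    API_TOKEN = "tok_secret_123"  # pretend secret the attacker shouldn't see

class Car:
    def __init__(self, nickname, config):
        self.nickname = nickname
        self.config = config

def render_unsafe(template, car):
    # BUG: the user-controlled template is used as the format string,
    # so format specifiers in it can walk object attributes.
    return template.format(car=car)

def render_safe(car):
    # Fixed: the format string is a constant; user data is only a value.
    return "Your car: {}".format(car.nickname)

# An attacker names their car with a format-string payload...
car = Car("{car.config.API_TOKEN}", AppConfig())
leaked = render_unsafe(car.nickname, car)
print(leaked)  # the "nickname" pulled the secret out of the config object
```

The same principle explains Curry's experiment: a free-text field that the application later interprets, whether as a format string or as HTML, is an injection point.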


Ransomware attacks: How to get the upper hand


"It's no different than a tornado or flood. Something happened. You assess the damage and if your recovery plan is solid, you're in good shape," he said, adding that the reverse is also true. If you haven't prepared and can't recover from ransomware, "you're screwed." Not only should an organization back up its files, but it should also place a layer between the server and its backup files so that a hacker won't see there are more assets to steal or freeze, Scott advised. Some ransomware attacks are sophisticated and can destroy metadata and passwords, he said, so it's best to preserve digital assets with a corresponding level of complexity. For instance, organizations could take the necessary disaster recovery steps now to perform a bare-metal restore -- essentially, a reinstall of operating systems and applications -- so that, later, all is not lost if ransomware locks out users. But even simple disaster recovery preparedness works as a defense against ransomware attacks.


Are CIOs Losing The Cyber Security Battle?

Are CIOs losing the cyber security battle? - CIO&Leader
Despite taking tangible steps to reduce their cybersecurity risk, a question that comes to mind is, ‘Why are companies still getting hit, and more often than ever?’ The report clarifies that there are some security holes not being plugged and it is here that CIOs need to pay greater attention. For example, the report explains, an up-to-date malware signature list won’t stop attackers hijacking your accounts, while rock-solid authentication won’t help if you’re not protecting your computers from ransomware. “Good cybersecurity demands defense in depth and proper risk assessment so that you can protect your weakest spots from attack first,” says the report. The survey also revealed that companies are facing attacks via multiple channels, including email (33%) and web (30%) among others. Software vulnerabilities and unauthorized USB sticks or other external devices were also common attack vectors. Perhaps even more worrying is that 20% of CIOs didn’t know how their networks were compromised. With cyber threats coming from supply chain attacks, phishing emails, software exploits, vulnerabilities, insecure wireless networks, and much more, businesses need a security solution that helps them eliminate gaps and better identify previously unseen threats.


Building blocks of an IIoT security architecture

Security framework functional building blocks
Since IIoT involves both IT and OT, security and real-time situational awareness should ideally span IT and OT subsystems seamlessly, without interfering with any operational business processes. The average lifespan of an industrial system is currently 19 years, so greenfield deployments using the most current and secure technologies are not always feasible. Security technology must often be wrapped around an existing set of legacy systems that are difficult to change. In both greenfield and brownfield deployments, all affected parties -- manufacturers, systems integrators and equipment owner/operators -- must be engaged to create a more secure and reliable IIoT system. As there is no single "best way" to implement security and achieve adequately secure behavior, technological building blocks should support a defense-in-depth strategy that maps logical defensive levels to security tools and techniques. Due to the highly segregated nature of industrial systems, security needs to be implemented in multiple contexts.


Vulnerable firmware in enterprise server supply chain


The first vulnerability is a failure of the update process to perform cryptographic signature verification before accepting updates; the second is a command injection vulnerability in the BMC code that performs the firmware update. Both issues allow an attacker running with administrative privileges on the host (for example, through exploitation of a different host-based vulnerability) to run arbitrary code within the BMC, and malicious modifications to the BMC firmware can be used to maintain persistence in the system, surviving common incident response steps such as reinstallation of the operating system, the researchers found. An attack could also modify the environment within the BMC to prevent any further firmware updates through software mechanisms, permanently disabling the BMC. The update mechanism could even be exploited remotely if the attacker has captured the administration password for the BMC, the researchers said.
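The missing check the researchers describe can be sketched in miniature. Real BMC firmware would verify an asymmetric signature (e.g. RSA or ECDSA over the image hash) against a vendor public key; an HMAC with a shared key stands in here so the sketch stays self-contained, and the function and key names are illustrative only:

```python
import hmac
import hashlib

def verify_and_apply_update(image: bytes, signature: bytes, vendor_key: bytes) -> bool:
    """Refuse to apply a firmware image unless its signature checks out.

    The HMAC is a stand-in for a real vendor signature; the point is that
    verification happens before any write to the BMC, not after.
    """
    expected = hmac.new(vendor_key, image, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking information via timing.
    if not hmac.compare_digest(expected, signature):
        return False  # reject tampered or unsigned image
    # ... only now would the image be flashed to the BMC ...
    return True
```

The vulnerable updaters effectively skipped the `compare_digest` step, so any image, signed or not, was accepted.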


Martin Fowler Discusses New Edition of Refactoring, along with Thoughts on Evolutionary Architecture

Refactoring is the idea of identifying the sequence of small steps that allows you to make a big change. That core idea hasn’t changed. Several new refactorings in the book deal with transforming data structures into other data structures, for example Combine Functions into Transform. Several refactorings were removed or not added to the book in favor of adding them to a web edition. A lot of the early refactorings are like cleaning the dirt off the glass of a window: you need them just to be able to see where the hell you are, and then you can start looking at the broader ones. Refactorings can also be applied to architecture evolution. Two recent posts on MartinFowler.com, How to break a Monolith into Microservices by Zhamak Dehghani and How to extract a data-rich service from a monolith by Praful Todkar, deal with this specifically. Evolutionary architecture is the broad principle that architecture is constantly changing. While related to microservices, it’s not microservices by another name.
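Combine Functions into Transform, mentioned above, replaces scattered calculation functions with a single transform that enriches a record with its derived values. A minimal Python illustration; the billing-style domain and field names here are invented for the example, though the book's own example is similar:

```python
# Before: derived values are computed by separate, scattered functions.
def base_charge(reading):
    return reading["quantity"] * reading["rate"]

def taxed_charge(reading):
    return base_charge(reading) * 0.1

# After Combine Functions into Transform: one transform enriches a copy of
# the record with every derived value, so consumers read fields, not functions.
def enrich_reading(reading):
    result = dict(reading)  # never mutate the input record
    result["base_charge"] = result["quantity"] * result["rate"]
    result["taxed_charge"] = result["base_charge"] * 0.1
    return result
```

The payoff is that all derivation logic lives in one place, and downstream code works with plain data.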



Quote for the day:

"It doesn't matter how much we know, what matters is how clearly others can understand what we know." -- Simon Sinek

Daily Tech Digest - July 16, 2019

Best tools for single sign-on (SSO)

Interestingly, most SSO products also cost about $8 per user per month but will require more IT manpower to implement. (Ping’s solution offers a lot of bang for its $3 per month price point, however.) Let’s talk a bit about MFA, because it is an important motivation for going the SSO route. Using MFA used to be mostly for the ultra-paranoid; now it is the minimum for enterprise security, especially considering the number and increasing sophistication of spear-phishing attacks. Sadly, the deployment of MFA is far from universal: a recent survey from Symantec (Adapting to the New Realities of Cloud Threats) found that two-thirds of respondents still don’t deploy any MFA tools to protect their cloud infrastructures. Certainly, having SSO can help ease the pain and move toward broader MFA acceptance. Besides MFA, there is another reason to up your authentication game: the need for adaptive or risk-based authentication. This means moving beyond issuing your users an “all-day access pass” when they begin work by logging into their laptops, and instead re-evaluating the risk of each access request.
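Adaptive authentication replaces the all-day pass with a per-request risk decision. A toy sketch of the idea follows; the signals, weights, and threshold are invented for illustration and bear no relation to any vendor's actual scoring:

```python
def auth_risk_score(known_device: bool, usual_location: bool,
                    off_hours: bool, sensitive_resource: bool) -> int:
    """Toy additive risk score; real products weigh far richer signals."""
    score = 0
    if not known_device:
        score += 40
    if not usual_location:
        score += 30
    if off_hours:
        score += 10
    if sensitive_resource:
        score += 20
    return score

def requires_mfa(score: int, threshold: int = 30) -> bool:
    # Step-up authentication: only challenge when risk crosses the threshold.
    return score >= threshold
```

A familiar device in a familiar place sails through; an unknown device triggers an MFA challenge. That selective friction is what makes broad MFA rollouts palatable to users.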



Trump’s hostile view of Bitcoin and crypto could chill industry

Trump tweeted that Facebook Libra's "virtual currency" will have little standing or dependability. "If Facebook and other companies want to become a bank, they must seek a new Banking Charter and become subject to all Banking Regulations, just like other Banks, both National," Trump wrote. Those comments came one day after he criticized both Facebook and Twitter for what he called bias against his supporters. Like other cryptocurrencies backed by fiat currency, Facebook's digital money would be purchased through a typical financial network and then stored in the Calibra digital wallet application for making purchases via ads on the social media platform. A user could also do the same thing through Facebook's most popular communication platforms: WhatsApp and Messenger. Facebook did not respond to questions by Computerworld about whether the president's comments would affect its plans to issue a cryptocurrency. Avivah Litan, a vice president of research at Gartner, said while it's "very difficult" to analyze Trump's intentions from his tweets, "it sounds to me like he is gearing up to clamp down on cryptocurrency adoption by Americans."


How to deal with cloud complexity

Many popular approaches to dealing with architectural complexity tell you to practice architectural discipline so your systems won’t be complex in the first place. The reality, however, is that cloud systems are built and migrated in short, disconnected sprints with little regard for standard platforms such as storage, compute, security, and governance. Most migrations and net-new developments are done in silos without considering the architectural commonality that would drive less complexity, so more complexity becomes inevitable. Although many are surprised when they experience complexity, it’s not always bad. In most cases, we see excessive heterogeneity because those who pick different cloud services make best of breed a high priority; complexity is the natural result. A good rule of thumb is to look at cloud operations, or cloudops. If you’re staying on budget, and there are few or no outages and no breaches, then it’s likely that your complexity is under control. Revisit these metrics every quarter or so. If all continues to be well, you’re fine: you are one of the lucky few with a less complex cloud implementation—for now.


Single Sign-Ons To Accelerate Growth Of Digital Identity: Study

A wide variety of countries have recently planned, or are planning, to bring digital identity to many citizens. This will affect the kinds of digital identity security available to consumers, as many of these initiatives are intended to bring identity verification to people who have never had official identification before. That being the case, these schemes need to be accessible to those with low levels of digital access, and so are likely to be SIM-based rather than relying on an online presence as such. These initiatives are also more likely to include a physical card than other forms of digital identity. This affects a range of use cases and allows a more consistent application of identity verification than identities that do not connect to a physical asset, largely because the core documentation on which the identity is founded contains a photograph as the core verification method. Other methods (such as fingerprint sensors) require additional infrastructure and do not eliminate the chance of presenting false data at the point of on-boarding.


How Suse is taking open source deeper into the enterprise


What a company like Suse is doing is to help enterprises such as banks, healthcare providers and retail companies match what they’re trying to do with what’s available in the open source world. We select the projects and make sure they can work together with enterprise IT infrastructure, and are stable, secure and supported over time. We’ve started doing that with Linux, OpenStack, Cloud Foundry and Kubernetes. Now, you mentioned Asia. The challenges I mentioned are common to everybody, but what we see in Asia, like in Europe, is that Asia is not a single, homogeneous market. Different countries are in different stages of adopting open source. I spend quite a lot of time in Japan, China, Hong Kong, Singapore, all of which are very different markets. Typically in Japan, enterprises are more conservative so we have a lot of customers like banks that are running Linux on mainframes. Singapore is more innovative, so we see OpenStack being used by the public sector and manufacturing companies.


Understanding the role of governance in data lakes and warehouses

Having data well organized and consistently aggregated allows for the creation of performance and operational metrics – reporting that drives business and allows leaders to make informed decisions. Including both historical and current information, organized in a consistent manner within the data warehouse, increases the quality of the viewed data and thus the quality of decision-making. ... Although they are different, the key to successful data lakes and data warehouses with useful, quality data is the same – governance. Data governance allows you to understand not only what is stored where and its source, but also the relative quality of the data, and to ascertain that quality consistently. Aside from clarity and structure, governance also allows control. With such control, the organization knows how the data is being used and whether or not it’s meeting its intended purpose. Say the data has been manipulated to meet a set of determined requirements; without data governance, someone else could come along and pull the data – not knowing it had been previously employed – resulting in an inaccurate analysis.


Cybersecurity: Is your boss leaving your organisation vulnerable to hackers?


CEOs and other senior board-level executives are exposing their organisations to cyberattacks and hackers because of a lack of awareness around cybersecurity, a new study has warned. Research by cybersecurity company RedSeal surveyed hundreds of senior IT and security professionals and found that many of these personnel believe there's a disconnect between the CEO and the information security team, which could be putting organisations at risk. ... "CEOs have wide access to their organisation's network resources, the authority to look into most areas, and frequently see themselves as exempt from the inconvenient rules applied to others. This makes them ideal targets," he added. However, despite some having fears around security at the very top of the organisation, on the whole, businesses appear to be taking cybersecurity seriously. Two-thirds of businesses say their cyber-incident response plan is well defined and well tested – either via real breaches or simulation tests. Three-quarters of firms also report they have cyber insurance, suggesting there's an awareness around preparing for the aftermath of an incident, should one occur.


To pay or not pay a hacker’s ransomware demand? It comes down to cyber hygiene

According to the FBI and most cybersecurity experts, no one should ever pay ransomware attackers. Giving in to the attackers’ demands only rewards them for their malicious deeds and breeds more attacks, they say. “The FBI encourages victims to not pay a hacker’s extortion demands,” the FBI says in an email to CSO. “The payment of extortion demands encourages continued criminal activity, leads to other victimizations, and can be used to facilitate additional serious crimes.” Jim Trainor, who formerly led the Cyber Division at FBI Headquarters and is now a senior vice president in the Cyber Solutions Group at risk management and insurance brokerage firm Aon, agrees. Trainor, who spent a fair amount of time dealing with ransomware attacks while he was in the Bureau, said his position has not changed. “I would recommend that people not pay the ransom. It’s extremely problematic,” he tells CSO. He conceded that making the determination to pay or not pay the attackers is ultimately a business decision, one that almost always hinges on whether the victim has access to adequate backups.


Government must ‘stop choosing ignorance’ around data


“The National Data Strategy must go beyond public services. Government’s role is broader than the delivery of public services; it can help shape how data is used across the whole of society through interventions such as research funding, procurement rules, regulatory activities and legislation,” the letter stated. “The strategy must recognise this and describe how government will make data work for everyone in the UK,” it added. However, the strategy “must deliver transformative, rather than incremental, change”, the letter stated, adding that the national data plan must be a long-term endeavour for government, with a vision for at least the next decade along with practical steps to turn any future vision into reality. Such ambitions may be unfulfilled if there is a lack of sustained strategic leadership on data, the letter warned. This is an issue that had been previously outlined in a recent report by the National Audit Office (NAO). Echoing the NAO’s concerns, the organisations stated the government must “get leadership from the very top if it is to get a grip on data”.


How digital and marketing executives are taking charge of digital transformation


Brahin says the key to success has been the marketing team's hybrid approach to digital transformation at UBS. Content is at the heart of this approach, where a centralised marketing organisation is helping line-of-business functions to transform the online experiences of clients. "Everything that concerns content delivery into the website and marketing channels is through a single approach, while business units still have control of their products and services. We partner with them to deliver marketing content into their service areas," she says. "It's an approach that has allowed us to create a solid foundation with a powerful content-delivery hub, where we can pump content to individual areas from a single hub. That's worked pretty well for us." The firm has analysed website analytics and used this insight to help deliver "modern, mobile experiences". McBain says the focus recently has been around optimisation and extending its content across new channels, including a recently launched website for the main brand.



Quote for the day:


"Strategy is not really a solo sport, even if you're the CEO." -- Max McKeown


Daily Tech Digest - July 15, 2019

Most Common Security Fails 

The most common security failure is not having a process. In addition, there’s a disconnect between security practice and the compliance regulations that executives focus on, one example being HIPAA’s cybersecurity requirement below: 164.306(a)(1) ensure the confidentiality, integrity, and availability of all electronic protected health information the covered entity creates, receives, maintains, or transmits. That is a broad, generic statement, and it is just one out of a document with almost a hundred of them, so on its own it is no longer meaningful. From the perspective of security failures – which ties into the state of security management – there’s an absolute disconnect between high-level frameworks like ISO, COBIT, and HIPAA and how you actually implement them. Many companies today feel that following a framework constitutes a security program, but a framework is a document that gives guidance, without any detail on how to implement that guidance. For example, you might start with a security framework like HIPAA and then use something like the CIS Controls to implement the guidance within it; that’s the second phase.



Why is it so hard to see IoT devices on the network?

Most IoT devices are known for their low CPU power, minuscule memory and unique operating systems (which often need to be studied from scratch). Many IoT devices are “protected” by factory-default usernames and passwords that are rarely changed. Furthermore, these devices are designed to connect to the wireless network, and most won’t function at all without a connection. These challenges make discovering and managing the devices difficult, especially if they aren’t being accounted for as part of IT inventory. To track their presence on the network, IT teams need dedicated visibility tools whose price can outweigh the relatively low cost of adopting the IoT devices themselves. As a result, many IoT devices are given free rein over the network and can’t be seen in regular endpoint or vulnerability scans. You may be thinking that the answer to this challenge lies with the device manufacturers. Indeed it does, but due to a lack of regulation on IoT security, manufacturers are only now starting to realize that a lack of security presents a barrier to implementation.
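Even without dedicated tooling, the inventory gap described above is easy to illustrate: flag any hardware address seen on the network that is missing from the IT inventory. This is a sketch only; real discovery would pull addresses from ARP tables, DHCP leases or wireless controller logs rather than hard-coded lists:

```python
def unknown_devices(seen_macs, inventory_macs):
    """Return MAC addresses observed on the network but absent from inventory.

    Normalisation matters: switches, DHCP servers and spreadsheets all
    format MAC addresses differently (case, dashes vs colons).
    """
    def norm(mac):
        return mac.lower().replace("-", ":")
    return sorted({norm(m) for m in seen_macs} - {norm(m) for m in inventory_macs})
```

Anything this diff returns is a device with network access that nobody is patching or scanning, which is precisely the risk the article describes.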


Leak Confirms Google Speakers Often Record Without Warning

Responding to the VRT NWS report, Google says that building technology that can work well with the world's many different languages, accents and dialects is challenging, and notes that it devotes significant resources to refining this capability. "This enables products like the Google Assistant to understand your request, whether you're speaking English or Hindi," David Monsees, Google's product manager for search, says in a Thursday blog post. Google says it reviews about 0.2 percent of all audio snippets that it captures. The company declined to quantify how many audio snippets that represents on an annualized basis. "As part of our work to develop speech technology for more languages, we partner with language experts around the world who understand the nuances and accents of a specific language," Monsees says. "These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant."


Network visibility challenges in modern networks

Organizations should strive for an end-to-end view of the health and operational status of their networks for several reasons. For one, visibility enhances your ability to troubleshoot problems as they arise. Everything from downed networks and interfaces to operational yet degraded links can be identified more quickly when you monitor and baseline data flows as they pass through the local area network (LAN), wide area network (WAN) and even out to the internet edge. Another reason for network visibility is to validate performance-based configurations. Visibility can help network managers better understand how network issues affect data on a per-application basis. If specific applications are business-critical, a manager can use configuration techniques, such as quality of service and traffic policing and shaping, to optimize these important data flows. Visibility can then validate that the performance modifications are working or identify when further configuration adjustments are needed.
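Traffic policing of the kind mentioned above is classically implemented with a token bucket: tokens accumulate at the configured rate up to a burst limit, and packets that find enough tokens pass while the excess is dropped or queued. A generic illustration, not any particular vendor's implementation:

```python
class TokenBucket:
    """Minimal token-bucket policer.

    rate  = tokens (bytes) added per second
    burst = bucket capacity, i.e. the largest burst that conforms
    """
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst   # start full
        self.last = 0.0       # timestamp of the previous decision

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # conforming traffic passes
        return False      # excess traffic is policed
```

Visibility closes the loop: monitoring the pass/drop counts such a policer produces is how a manager validates that the shaping configuration is actually protecting the business-critical flows.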


Billion-dollar privacy penalties put CEOs on notice


The unprecedented penalties imposed on Facebook, Marriott and British Airways should serve as a warning for company leaders, according to Tom Turner, CEO of cyber security ratings firm BitSight. “CEOs around the globe are on notice that they are accountable for cyber security performance management just the same way they are accountable for managing the business,” he said. Commenting on the FTC settlement, Nuala O’Connor, president and CEO of the Center for Democracy & Technology (CDT), said: “The record-breaking settlement highlights the importance of data stewardship in the digital age. “The FTC has put all companies on notice that they must safeguard personal information,” she said, adding that privacy regulation in the US is “broken”. While large after-the-fact fines matter, O’Connor said strong, clear rules to protect consumers are more important, and called on the US Congress to pass a comprehensive federal privacy law in 2019.


How Not to Get Eaten by Technology

Effective and smart learning techniques and strategies are required. By effective learning techniques, I mean methods that help you identify hot markets, hot technologies and trends, focus on what matters, learn things quickly, and so on. Specialization has become more valuable as platforms become more sophisticated, so being a “jack of all trades” is no longer acceptable to many companies, since mastering a particular track is a non-trivial investment of time and effort. (Well, this is subject to debate!) Software engineering is becoming a well-paid field, in particular for renowned experts, since it is not easy to become a well-versed engineer. Soft skills such as negotiation, requirements engineering, time planning, and public speaking are timeless skills that will boost career opportunities. Domain knowledge is always valuable; it's worth spending time understanding the business rules, domain language, and concepts of the specific business area you are working in, such as health, HR or banking.


How organizations are bridging the cyber-risk management gap

OK, so there’s a cyber-risk management gap at most organizations. What are they going to do about it? The research indicates that: 34% will increase the frequency of cyber-risk communications between the CISO and executive management. More communication is a good thing, but CISOs must make sure they have the right data and metrics, and this has always been a problem. I see a lot of innovation around CISO cyber-risk management dashboards from vendors such as Kenna Security, RiskLens (supporting the Factor Analysis of Information Risk (FAIR) standard), and Tenable. Over time, cyber-risk analytics will become a critical component of a security operations and analytics platform architecture, so look for vendors such as Exabeam, IBM, LogRhythm, Micro Focus, Splunk, and Sumo Logic to invest in this area. 32% will initiate a project for sensitive data discovery, classification, and security controls. Gaining greater control of sensitive data is always a good idea, yet many organizations never seem to get around to it.
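FAIR, mentioned above, quantifies risk at its simplest as loss event frequency times loss magnitude, which is what turns a dashboard metric into a number executives can compare against budgets. A deliberately minimal sketch; real FAIR analyses use ranges and Monte Carlo simulation rather than the point estimates shown here:

```python
def annualized_loss_exposure(loss_event_frequency: float,
                             loss_magnitude: float) -> float:
    """FAIR's core identity with point estimates.

    loss_event_frequency: expected loss events per year
    loss_magnitude:       expected cost per event (same currency as output)
    """
    return loss_event_frequency * loss_magnitude
```

For example, an incident expected once every two years (frequency 0.5) costing $200,000 per occurrence yields a $100,000 annualized exposure, a figure a CISO can weigh directly against the cost of a mitigating control.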


Software Engineer Charged With Stealing Company Secrets

Xudong Yao, 57, has been indicted on nine federal counts of theft of trade secrets, according to the U.S. Attorney's Office for the Northern District of Illinois, which is overseeing the case along with the FBI. Yao, who also used the first name "William," is believed to be living in China, according to federal prosecutors. During his time with the company, Yao allegedly downloaded thousands of computer files and other documents that contained various company trade secrets and intellectual property, including data related to the system that operates the unnamed manufacturer's locomotives, according to the indictment. While Yao was taking his former employer's intellectual property, he was negotiating for a new job with a firm in China that provided automotive telematics service systems, the Justice Department alleges. Yao was born in China, but he's a naturalized U.S. citizen, according to the FBI. Theft of trade secrets is a federal crime that carries a possible 10-year prison sentence for each count, according to the Justice Department.


IT-Based Attacks Increasingly Impacting OT Systems: Study

While IT systems have been standardized for many years on the TCP/IP protocol, OT systems use a wide array of protocols, many of which are specific to functions, industries, and geographies. The OPC Foundation was established in the 1990s as an attempt to move the industry toward protocol standardization. OPC’s new Unified Architecture (OPC UA) has the potential to unite protocols for all industrial systems, but that consolidation is many years away due to the prevalence of legacy protocols and the slow replacement cycle for OT systems. Cyber criminals have actively attempted to capitalize on this confusion by targeting the weak links in each protocol. These structural problems are exacerbated by the lack of standard protections and poor security hygiene practiced with many OT systems—a legacy of the years when they were air gapped. Figure 2 shows the number of unique threats targeting machines using specific ICS/SCADA protocols. Despite seasonal fluctuations and a wide variety of targets, the data is clear on one thing: IT-based attacks on OT systems are increasing.


4 steps to reaping the software benefits of continuous testing

Evolving your software development process to include continuous testing is a necessary investment so that your team’s software is high quality and delivered quickly. Testing mitigates the risk of poor software quality by identifying defects before there is impact to your customers, your business operations and your revenue. Testing is good, but continuous testing is better, because you can find and fix more defects sooner and avoid the accumulation of technical debt. Software technical debt includes defects in your software not yet discovered plus the backlog of lower-priority defects waiting to be fixed. Technical debt adds to the complexity and cost of maintaining your software over time. With CT, you can take action immediately to fix defects and avoid adding to your technical debt. CT decreases the time and cost of fixing a defect: research by Perfecto disclosed that a software defect fixed on the same day it was detected took only one hour to fix, while it took eight hours if detected at the end of a two-week sprint.
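The Perfecto figures cited above (one hour for a same-day fix versus eight hours at sprint end) make the cost of deferred fixes easy to model. The function below is an illustrative back-of-the-envelope model, not something taken from the research itself:

```python
def defect_fix_hours(defects: int, same_day_fraction: float,
                     same_day_cost: float = 1.0,
                     sprint_end_cost: float = 8.0) -> float:
    """Total hours spent fixing a batch of defects.

    same_day_fraction: share of defects caught and fixed the day they appear
    (continuous testing pushes this toward 1.0). Default costs are the
    1-hour / 8-hour figures from the cited Perfecto research.
    """
    same_day = defects * same_day_fraction
    deferred = defects - same_day
    return same_day * same_day_cost + deferred * sprint_end_cost
```

With 10 defects per sprint, fixing everything same-day costs 10 hours of rework, while deferring everything to sprint end costs 80, an 8x difference that compounds sprint after sprint.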



Quote for the day:


"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard