Daily Tech Digest - December 03, 2019

Insider risk management – who’s the boss?

The CRO may be the best person to lead the insider threat program (ITP). This largely depends, however, on the scope and role of the CRO itself. Some CROs focus only on the strategic risk of the company. They set organizational risk tolerances and may develop methodologies for capturing and measuring risk postures. In this model, the operational risk is still wholly “owned” by the operational leaders (CSO, CISO, business units, etc.). CROs that fall into this category are not well positioned to lead an ITP because they lack the visibility and operational granularity it requires. Other CROs, however, focus on both the strategic and operational risk of the company. They not only set organizational risk tolerances, but are also involved in measuring, managing, and improving the operational risk posture of the organization. CROs in this group are well positioned to lead the ITP. They will often have the necessary high-level authority (reporting to the CEO, Audit Committee, etc.) and, by virtue of their scope, will also have the necessary relationships across all functions of the organization (business units, legal, HR, CSO, CISO, etc.).



Redgate’s journey to DevOps

While Redgate had a culture that was favorable towards DevOps, introducing it was a different story. The software development teams were eager to move to the shorter development cycles and continuous iteration of development and testing that DevOps promotes, but new Agile processes and practices had to be adopted to make it happen. The question was, which processes and practices? Scrums? Kanban boards? A3s? Standups? Burndown charts? The Deming Cycle? Monthly releases? Weekly releases? Pair programming? Mob programming? Extreme programming? Trunk-based development? Continuous delivery or continuous deployment? As you can see, there are many aspects to Agile, so the first job was to understand them and see which could – and should – be implemented at Redgate. In 2008, the first project to use Scrum began at Redgate. The technique breaks work down into goals that can be completed within a fixed time period, or sprint, typically two weeks to a month. At the end of each sprint, the ideal is to have software ready to release.


Why you need to pay more attention to combatting AI bias


While managing AI-driven functions within an enterprise can be valuable, it can also present challenges, the DataRobot report said. "Not all AI is treated equal, and without the proper knowledge or resources, companies could select or deploy AI in ways that could be more detrimental than beneficial." The survey found that more than a third (38%) of AI professionals still use black-box AI systems, meaning they have little to no visibility into how the data fed into their AI solutions is being used. This lack of visibility could contribute to respondents' concerns about AI bias occurring within their organization, DataRobot said. AI bias is occurring because "we are making decisions on incomplete data in familiar retrieval systems," said Sue Feldman, president of the cognitive computing and content analytics consultancy Synthexis. "Algorithms all make assumptions about the world and the priorities of the user. That means that unless you understand these assumptions, you will still be flying blind." This is why it is important to use systems that keep humans in the loop, instead of making decisions in a vacuum, added Feldman, who is also co-founder and managing director of the Cognitive Computing Consortium. They are "an improvement over completely automatic systems," she said.
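As a rough illustration of the human-in-the-loop idea Feldman describes, the sketch below routes low-confidence model outputs to a reviewer instead of acting on them automatically. The model interface, threshold, and records here are made-up assumptions, not anything taken from the DataRobot survey.

```python
# Minimal human-in-the-loop sketch: accept high-confidence predictions automatically
# and flag the rest for a person. The predictor, threshold, and records are
# illustrative assumptions only.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Decision:
    record_id: str
    label: str
    confidence: float
    needs_human_review: bool


def route_predictions(
    records: List[Tuple[str, dict]],
    predict: Callable[[dict], Tuple[str, float]],
    review_threshold: float = 0.8,
) -> List[Decision]:
    """Route any prediction below the confidence threshold to human review."""
    decisions = []
    for record_id, features in records:
        label, confidence = predict(features)
        decisions.append(Decision(record_id, label, confidence, confidence < review_threshold))
    return decisions


if __name__ == "__main__":
    # Stub predictor standing in for a real (possibly black-box) model.
    stub_predict = lambda features: ("approve", 0.65 if features.get("edge_case") else 0.95)
    for decision in route_predictions(
        [("a1", {"edge_case": False}), ("a2", {"edge_case": True})], stub_predict
    ):
        print(decision)
```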



How to Integrate Infosec and DevOps Using Chaos Engineering

D.I.E. is an acronym where D is for distributed, meaning that service outages, like a denial of service, are less impactful. I is for immutable, meaning that changes are easier to detect and reverse. And E is for ephemeral, meaning the value of an asset is driven as close to zero as possible from the attacker's perspective. These are the system properties that chaos security principles help build, producing systems that are secure by design. Start with the expectation that security controls will fail, and prepare accordingly. Then embrace the ability to respond to security incidents instead of merely trying to avoid them. Shortridge recommended using game days to practice potentially risky scenarios in a safe environment. She also recommends using production-like environments to get a better understanding of how things will behave in a complex system, and starting with simple tests before moving on to more sophisticated ones. For instance, build tests that teams can run effectively with accessible scenarios, such as phishing or SQL injection.
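Below is a minimal, illustrative sketch of what such an experiment might look like in Python, built on the expectation that controls will fail: a preventive control is deliberately disabled and the test checks that an independent detection layer still fires. The control, monitor, and alert sink are hypothetical stand-ins, not part of any framework Shortridge prescribes.

```python
# Illustrative security chaos experiment: "fail" a preventive control on purpose
# and verify that detection still catches the bad behaviour.

class AlertSink:
    """Collects alerts; on a real game day this would be a SIEM or paging system."""
    def __init__(self):
        self.alerts = []

    def raise_alert(self, message: str):
        self.alerts.append(message)


class EgressFilterControl:
    """Preventive control that is supposed to block suspicious outbound traffic."""
    def __init__(self):
        self.enabled = True

    def allow(self, destination: str) -> bool:
        if not self.enabled:           # failed open: everything gets through
            return True
        return destination.endswith(".internal")


class OutboundTrafficMonitor:
    """Independent detection layer watching what actually leaves the network."""
    def __init__(self, sink: AlertSink):
        self.sink = sink

    def observe(self, destination: str):
        if not destination.endswith(".internal"):
            self.sink.raise_alert(f"unexpected egress to {destination}")


def run_experiment() -> bool:
    sink = AlertSink()
    control = EgressFilterControl()
    monitor = OutboundTrafficMonitor(sink)

    control.enabled = False            # chaos injection: assume the control fails
    destination = "exfil.example.com"
    if control.allow(destination):     # traffic slips past the broken control...
        monitor.observe(destination)   # ...so detection has to catch it

    detected = bool(sink.alerts)
    print("detected:" if detected else "NOT detected:", sink.alerts)
    return detected


if __name__ == "__main__":
    assert run_experiment(), "game-day finding: failure went undetected"
```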


RTO? – Making Sense of High Availability

Monitoring is the cornerstone of your RTO target. If you don't know there is a problem, you can't fix it. Many blogs and articles focus on the next three parts, but let's be honest: if you don't know there's a problem, you can't respond. If your logs operate on a 5-minute delay, then you need to factor those 5 minutes into your RTO. From there, the next piece is response time. And I mean this in the true sense of how quickly you can trigger a failover to your DR state. How quickly can you triage the problem and respond to the situation? The best RTO targets leverage as much automation as possible here. Next, by looking at data replication, we can ensure that we are able to bring any data stores back up quickly and maintain business continuity. This is important because every time we have to restore a data store, that takes time and pushes out our RTO. Being able to fail over in 2 minutes doesn't do you much good if it takes 20 minutes to get the database back up. Finally, failover: if you are in a state where you need to fail over, how long does that take, and what automation and steps can you take to shorten that time significantly?
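Those four pieces add up to the recovery time you can actually promise. Here is a back-of-the-envelope sketch in Python; the durations are purely illustrative assumptions, not figures from the article.

```python
# RTO budget as the sum of the four stages discussed above: detection (monitoring
# delay), response/triage, data restore, and the failover itself.

from datetime import timedelta


def estimate_rto(detection: timedelta, response: timedelta,
                 data_restore: timedelta, failover: timedelta) -> timedelta:
    """Total recovery time is the sum of each stage on the critical path."""
    return detection + response + data_restore + failover


if __name__ == "__main__":
    budget = estimate_rto(
        detection=timedelta(minutes=5),      # e.g. logs arrive on a 5-minute delay
        response=timedelta(minutes=3),       # triage and trigger the failover
        data_restore=timedelta(minutes=20),  # restoring the database dominates here
        failover=timedelta(minutes=2),       # the failover itself
    )
    print(f"Estimated RTO: {budget}")        # -> 0:30:00
```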


Working with Identity Server 4

Identity Server 4 is the tool of choice for getting bearer JSON web tokens (JWTs) in .NET. The tool comes as a NuGet package that fits in any ASP.NET project. Identity Server 4 is an implementation of the OAuth 2.0 spec and supports the standard flows. The library is extensible to support parts of the spec that are still in draft. Bearer JWTs are a preferable way to authenticate requests against a backend API. A JWT is stateless and not tied to the user session, which aids in decoupling software modules and works well in a distributed system; it reduces friction between modules because there are no shared dependencies like a user session. In this take, I'll delve deep into Identity Server 4. This OAuth implementation is fully compatible with the spec. I'll start from scratch with an ASP.NET Web API project using .NET Core. I'll stick to the recommended version of .NET Core, which is 3.0.100 at the time of this writing. You can find a working sample of the code here. To begin, I'll use CLI tools to keep the focus on the code without visual aids from Visual Studio.
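For a consumer-side picture of why bearer JWTs decouple modules, here is a hedged Python sketch that requests a token via the OAuth 2.0 client-credentials flow and calls a protected API with it. The /connect/token path is IdentityServer's conventional token endpoint, but the host, client id, secret, scope, and API URL are placeholder assumptions, not values from the article's sample.

```python
# Obtain a bearer JWT from an OAuth 2.0 token endpoint and call a protected API.
# All URLs and credentials below are hypothetical placeholders.

import requests

TOKEN_ENDPOINT = "https://localhost:5001/connect/token"   # assumed IdentityServer host
API_URL = "https://localhost:6001/weatherforecast"        # assumed protected API


def get_access_token() -> str:
    """Client-credentials flow: exchange a client id/secret for an access token."""
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "client_credentials",
            "client_id": "sample-client",        # placeholder client
            "client_secret": "sample-secret",    # placeholder secret
            "scope": "api1",                     # placeholder scope
        },
    )
    response.raise_for_status()
    return response.json()["access_token"]


def call_api(token: str) -> dict:
    # The bearer token is self-contained, so the API can validate it without
    # any shared session state between the two services.
    response = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"})
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(call_api(get_access_token()))
```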


The IT4IT standard was conceived of more than eight years ago by a small group of European companies that saw the need for normative guidance to direct functionality and interoperability for large, multi-vendor IT management software portfolios. Each had tried to create a tool orchestration and interoperability architecture themselves, at great cost. Lesson learned: Their solutions were very similar and, in fact, just the kind of thing that should be a general solution or standard, not proprietary or unique to one company. Supported by HP Software, they worked together as a consortium to merge their individual efforts into a common model that could stand as a universally available normative standard for the industry. This effort resulted in IT4IT version 1.0. At that point the IP was donated to The Open Group, an organization known for its management of several industry standards such as UNIX, TOGAF and others. The private consortium became the IT4IT Forum and their architecture evolved into the publicly available IT4IT Reference Architecture standard.


Menlo Security CEO on what small companies should know about cybersecurity

We've seen two things happen. One, probably over the last 10 years — security budgets have probably tripled, if not more. So security has become much more front of mind for the CIO and boards as we keep reading about these high-profile breaches that end up causing a lot of damage and reputation loss for the companies that were breached. And in that same timeframe that budgets have gone 3X, I would say that the number of infections has probably risen by a factor of three as well, if not more. And that's counterintuitive, because normally the more you invest in a certain solution set, the better results you get. So the fact that it's not working is, I'd say, kind of the big challenge — and people miss that. They keep investing in the same concepts, the same solutions, the same vendors. ... There wasn't a great understanding of just how bad the threat could be. But I think we've seen enough cyber incidents in the headlines, including some high-profile events like those that affected our U.S. elections and various things like that.


New Android bug targets banking apps on Google Play store

As Promon describes it, StrandHogg allows a malicious app masquerading as a legitimate one to ask for certain permissions, including access to SMS messages, photos, GPS, and the microphone. Unsuspecting users approve the requests, thinking they're granting permission to a legitimate app and not one that's fraudulent and malicious. When the user enters their login credentials within the app, that information is immediately sent to the attacker, who can then sign in and control sensitive apps. The vulnerability itself lies in the multitasking system of Android, Promon's marketing and communication director, Lars Lunde Birkeland, said. The exploit is based on an Android control setting called "taskAffinity," which allows any app, including malicious ones, to freely assume any identity in the multitasking system, Birkeland said. A specific malware sample analyzed by Promon was never available on Google Play itself; instead, it was installed through dropper apps and hostile downloaders that were distributed on Google's app store, according to Promon. Such apps either have or pretend to have the features of games, utilities, and other popular apps, but actually install additional apps that can deploy malware or steal user data.



Traditionally, a threat actor might take over an email account and send a message internally about making a wire transfer or deposit to some “new vendor.” As BEC became more popular over the last few years, criminals recognized they could add legitimacy to their phony calls to action by sending them from an actual vendor’s account, resulting in what’s being called Vendor Email Compromise. The first step is hijacking a corporate account; the second is re-routing funds from that organization’s customers into criminal-controlled accounts, under the guise of a transaction problem or account change. Enterprises can empower suppliers to prevent this fraud and the associated damages. Sharing account exposure data directly with suppliers through your vendor risk management solution is the most efficient way to convey a sense of urgency for remediating the issues that put you both at risk; seeing their actual risk data also points their security team in the right direction. Alternatively, security teams can regularly check recovered breach data for email addresses connected to their suppliers and share that information with them manually, though this could quickly become quite cumbersome.
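As a rough sketch of that manual check, the Python below scans recovered breach records for addresses on supplier domains so the findings can be shared with the affected vendor. The domains and records are invented examples, not real breach data.

```python
# Group breached email addresses by supplier domain so they can be passed on
# to the affected vendor. Supplier domains and records are made-up examples.

from collections import defaultdict

SUPPLIER_DOMAINS = {"acme-parts.example", "northwind.example"}  # assumed supplier domains

breach_records = [  # e.g. parsed from a recovered credential dump
    {"email": "jane.doe@acme-parts.example", "source": "2019-combo-list"},
    {"email": "user@unrelated.example", "source": "2019-combo-list"},
    {"email": "ap-clerk@northwind.example", "source": "forum-dump"},
]


def exposures_by_supplier(records):
    """Return breached addresses keyed by supplier domain for manual outreach."""
    hits = defaultdict(list)
    for record in records:
        domain = record["email"].rsplit("@", 1)[-1].lower()
        if domain in SUPPLIER_DOMAINS:
            hits[domain].append(record)
    return dict(hits)


if __name__ == "__main__":
    for domain, records in exposures_by_supplier(breach_records).items():
        print(f"{domain}: {len(records)} exposed account(s) -> notify supplier")
```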



Quote for the day:


"Making good decisions is a crucial skill at every level." -- Peter Drucker

