
Daily Tech Digest - July 18, 2022

Cyber Safety Review Board warns that Log4j event is an “endemic vulnerability”

According to the report, "The pace, pressure, and publicity compounded the defensive challenges." As a result, researchers found additional vulnerabilities in Log4j, contributing to confusion and "patching fatigue," and "responders found it difficult to find authoritative sources of information on how to address the issues. This frenetic period culminated in one of the most intensive cybersecurity community responses in history." ... The few organizations that responded effectively to the event "understood their use of Log4j and had technical resources and mature processes to manage assets, assess risk, and mobilize their organization and key partners to action. Most modern security frameworks call out these capabilities as best practices." ... A fog still hovers over the event because, "No authoritative source exists to understand exploitation trends across geographies, industries or ecosystems. Many organizations do not even collect information on specific Log4j exploitation, and reporting is still largely voluntary. Most importantly, however, the Log4j event is not over."


DTN’s CTO on combining IT systems after a merger

Enterprises often make strategic errors when combining IT systems following an acquisition, Ewe says. “The number one mistake I see is, ‘Since we acquired you, clearly we win,’” he says. “Just because A bought B, you don’t want to assume that A has better technology than B.” Another common mistake is to go solely by the numbers, picking one company’s IT system over the other’s because it has the highest revenue or profitability, he says: “The issue there is that you’re oversimplifying the process.” Given the investment in time and money necessary to merge two companies’ IT systems, “it’s worthwhile spending an extra few weeks up-front to make a more thorough analysis of which solution or which pieces of which solutions should come together,” Ewe says. Jumping straight in and making a wrong decision can cost more in the long term. Ewe consulted with product and sales management, and with customers, to identify the needs DTN’s single engine would have to satisfy, as well as the use cases it would serve, before evaluating the existing assets against those needs. 


Ransomware and backup: Overcoming the challenges

Recovering data after a ransomware attack is more complex and riskier than recovery from a system outage or natural disaster. The greatest risk is that backups contain undetected ransomware, which then replicates into the production or recovered systems. This risk is reduced by using air-gapped copies, immutable copies and snapshots, and by keeping more copies than conventional backup alone would require. It demands a more cautious approach to data recovery, one that can be at odds with the commercial pressures for short RTOs and recent RPOs. Matters are made more difficult because there are no viable, fool-proof systems that can scan data for ransomware before it is backed up, says Barnaby Mote, managing director at backup specialist Databarracks. “Before ransomware was a thing, replicating data from production systems to DR as quickly as possible was a sound recovery strategy for conventional disasters,” he says. “Now, with ransomware, it has the opposite of the desired effect, rendering recovery systems unusable.”
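The cautious restore-point selection this implies, picking the newest immutable copy taken before the suspected infection and keeping enough generations that such a copy exists, can be sketched as follows. This is a minimal illustration in Python; the timestamps and immutability flags are invented for the example.

```python
from datetime import datetime, timedelta

def pick_restore_point(backups, suspected_infection):
    """Pick the newest immutable backup taken before the infection.

    backups -- list of (timestamp, immutable) tuples
    Returns the chosen timestamp, or None if no safe copy exists,
    which is why more generations are kept than conventional backup
    alone would need.
    """
    safe = [ts for ts, immutable in backups
            if immutable and ts < suspected_infection]
    return max(safe, default=None)

# Hypothetical history: one backup per day, every other one immutable.
now = datetime(2022, 7, 18)
backups = [(now - timedelta(days=d), d % 2 == 0) for d in range(1, 15)]

# Infection suspected three days ago: the newest safe copy is older still.
restore = pick_restore_point(backups, suspected_infection=now - timedelta(days=3))
```

A mutable copy from two days ago is newer, but only an immutable copy predating the compromise is safe to restore.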


Continuous Intelligence: Definition, Benefits, and Examples

While humans cannot inspect every possible characteristic and combination in the flood of incoming data, machines can. Complementing analytics that provide precise answers to questions users know to ask, a machine can continuously monitor data in the background to detect unknown correlations and trends that deviate from what the system would have expected based on previous observations. This way, companies can identify hidden but potentially relevant signals in the data. Gartner predicts that by 2022, more than half of major new business systems will incorporate continuous intelligence capabilities. By integrating artificial intelligence (AI)-based continuous intelligence into their day-to-day operations, companies can: boost efficiency by spending less time sifting through data from a variety of disparate sources; focus on what really matters for their business; and speed time to action. By automatically inspecting critical business health indicators such as revenue, web page views, active users, or transaction volume in real time, businesses can accelerate their time to insight and action and respond to situations before the business is impacted.
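The background monitoring described above amounts to streaming anomaly detection: learn what "normal" looks like from recent observations and flag deviations. A minimal sketch in Python; the metric values, window size, and threshold are illustrative assumptions, not from the article.

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window=30, threshold=3.0):
    """Return a function that flags values deviating from recent history.

    Keeps a sliding window of past observations; a new value is anomalous
    when it lies more than `threshold` standard deviations from the window
    mean. Window size and threshold are illustrative choices.
    """
    history = deque(maxlen=window)

    def observe(value):
        anomalous = False
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalous = True
        if not anomalous:
            history.append(value)  # only learn from normal observations
        return anomalous

    return observe

# A steady stream of hourly transaction counts, then a sudden spike.
detect = make_detector(window=10, threshold=3.0)
stream = [100, 102, 98, 101, 99, 103, 97, 100, 102, 500]
flags = [detect(v) for v in stream]
```

No one asked "is transaction volume spiking?"; the detector runs continuously in the background and surfaces the deviation on its own, which is the essence of the approach described.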


7 reasons Java is still great

As a longtime Java programmer, I found it surprising (astonishing, actually) to watch the language successfully incorporate lambdas and closures. Adding functional constructs to an object-oriented programming language was a highly controversial and impressive feat. So was absorbing concepts introduced by technologies like Hibernate and Spring (JSR 317 and JSR 330, respectively) into the official platform. That such a widely used technology can still integrate new ideas is heartening. Java's responsiveness helps to ensure the language incorporates useful improvements. It also means that developers know they are working within a living system, one that is being nurtured and cultivated for success in a changing world. Project Loom—an ambitious effort to re-architect Java’s concurrency model—is one example of a project that underscores Java's commitment to evolving. Several other proposals currently working through the JCP demonstrate a similar willingness to go after significant goals to improve Java technology. The people working on Java are only half of the story. The people who work with it are the other half, and they are reflective of the diversity of Java's many uses.


Search Here: Ransomware Groups Refine High-Pressure Tactics

Ransomware groups continue to refine the tactics they use to better pressure victims into paying. And they're succeeding. "In recent months, we have seen an increase in the number of ransomware attacks and ransom amounts being paid," the heads of Britain's lead cybersecurity agency and privacy watchdog warned last week in an open letter to the legal industry. The impetus for the alert from Britain's National Cyber Security Center - the public-facing arm of intelligence agency GCHQ - and the Information Commissioner's Office: They're urging solicitors to never advise clients to pay a ransom. Doing so will not lessen any penalties the ICO might levy, helps perpetuate the ransomware business model and could violate U.S. sanctions, they say. But the increase in ransoms being paid speaks to the success of ransomware groups' continuing innovation. Psychological pressure remains a specialty. After infecting systems, many types of ransomware reboot infected PCs to a lock screen that lists the ransom demand, a cryptocurrency wallet address for routing funds and a countdown timer. 


Functional programming is finally going mainstream

For some, using an object-oriented language like Java, JavaScript, or C# for functional programming can feel like swimming upstream. “A language can steer you towards certain solutions or styles of solutions,” says Gabriella Gonzalez, an engineering manager at Arista Networks. “In Haskell, the path of least resistance is functional programming. You can do functional programming in Java, but it’s not the path of least resistance.” A bigger issue for those mixing paradigms is that you can’t expect the same guarantees you might receive from pure functions if your code includes other programming styles. “If you’re writing code that can have side effects, it’s not functional anymore,” Williams says. “You might be able to rely on parts of that code base. I’ve made various functions that are very modular, so that nothing touches them.” Working with strictly functional programming languages makes it harder to accidentally introduce side effects into your code. “The key thing about writing functional programming in something like C# is that you have to be careful because you can take shortcuts and then you’ve got the exact sort of mess you would have if you weren’t using functional programming at all,” Louth says.
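The distinction Williams and Louth draw can be made concrete with a small illustration. The articles discuss Haskell, Java, and C#; Python is used here as a neutral sketch, and the pricing functions are invented for the example.

```python
# Impure: reads and mutates shared state, so calls are order-dependent
# and results cannot be cached, tested in isolation, or parallelized safely.
discount_rate = 0.1

def impure_price(price):
    global discount_rate
    discount_rate += 0.05            # hidden side effect
    return price * (1 - discount_rate)

# Pure: output depends only on the arguments, with no observable side
# effects; this is the guarantee "mixing paradigms" silently forfeits.
def pure_price(price, rate):
    return price * (1 - rate)

# The pure version always gives the same answer for the same inputs...
a = pure_price(100.0, 0.1)
b = pure_price(100.0, 0.1)

# ...while the impure version drifts as the hidden state changes.
c = impure_price(100.0)
d = impure_price(100.0)
```

This is the "various functions that are very modular, so that nothing touches them" idea: keep the pure core large and push mutation to the edges, whatever language you write in.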


Safeguarding the open source model amidst big tech involvement

Two of the main ways to safeguard open source and its community are smart licensing tactics and constant innovation. The first is to simply switch the project licence from a permissive open source licence to a more restrictive one. Two licences in particular can be used to protect against cloud providers and large corporations: AGPL-3 and SSPL, developed and adopted by the likes of MongoDB, Elastic and Grafana to protect themselves from AWS. While many projects have shifted away from GPL-style licences towards more permissive forms of licensing, GPL requires contributors to make their code available to the open source community: the so-called “copyleft”. This traditional licensing style helps to create a more open, transparent ecosystem. Another way in which open source can safeguard its future is through smart innovation. Constantly innovating to satisfy users should be the way forward for the evolution of open source projects and solutions, enabling companies to maintain their competitive edge and keep up with technological trends.


5 ways fear can derail your digital transformation strategy

When we confront new work technologies such as a hybrid workplace, virtual meeting rooms, or new software, we tend to resist or avoid them simply because they’re new and we’re not used to them. This creates division. A company looking to offer a hybrid workplace might encounter resistance from employees, managers, and even customers who refuse to recognize this arrangement. What appears to be a simple reluctance to change is actually a deep-seated fear of changing a comfortable status quo. What you can do about this: Offer facts to neutralize fear. People often use their own frame of reference if they are not given something tangible to hold on to. If the change involves new technology, demonstrate the technology. Let them see how it works. If the change is organizational, such as a hybrid workspace, present the facts about how it will work, what will change, and what will stay the same. Listen to and respond to their questions and objections. Humans are dominated by emotion, and logic is always playing catch-up. 


The Four P's of Pragmatically Scaling Your Engineering Organization

Your people aren’t just the heart and soul of the company, they’re the building blocks for its future. When you're growing rapidly it can be tempting to add developers to your team as quickly as possible, but it's important to first consider your company goals while remaining practical about how you’re scaling. This is the key foundation for building the right organization. ... Scaling your processes comes down to practical prioritization. It is crucial to clearly establish processes that balance both short- and long-term wins for the company, beginning with the systems that need to be fixed immediately. Start by instituting a planning process that looks at things from an annual, quarterly, or even monthly perspective, and try not to get bogged down deliberating over a planning methodology in the first stage. ... Scaling the platform is often the biggest challenge organizations face in the hyper-growth phase. But it’s important to remember that building toward a north star doesn’t mean that you’re building the north star. Now is the time to focus on intentional, iterative improvement of the platform rather than implementing sweeping changes to your product.



Quote for the day:

"It is one thing to rouse the passion of a people, and quite another to lead them." -- Ron Suskind

Daily Tech Digest - March 09, 2020

Can Continuous Intelligence and AI Predict the Spread of Contagious Diseases?


Did past efforts to model the spread of contagious diseases make false assumptions about the data they relied on? Does the fact that many people in one geographic region search for the name of an emerging contagious disease mean the disease is present and growing? Perhaps, perhaps not. The danger is relying on coincidences and not linking cause to effect. Did past and current efforts have all the data they needed? One issue with forecasting the spread of a disease is that models might not have accurate data. The issue is especially relevant at the onset of new diseases. Flu-like symptoms are easy to confuse with those of other illnesses. Doctors may not know the symptoms of a disease at its onset, or they may make inaccurate diagnoses. Are the models based on the right science? At the early stage of investigating a newly found disease, even basic information, like how a disease spreads, is unknown. Is it airborne? Does it spread via exposure to blood or other bodily fluids? What’s the incubation period? Such mechanisms need to be nailed down before predictions can be made.
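The unknowns listed above (transmission route, incubation and infectious period) are exactly the parameters a compartmental epidemic model needs. A minimal SIR sketch in Python, with invented parameter values, shows how strongly a forecast depends on them: two plausible guesses for the transmission rate produce very different outbreaks.

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the SIR compartmental model.

    s, i, r -- susceptible / infected / recovered population fractions
    beta    -- transmission rate (depends on *how* the disease spreads)
    gamma   -- recovery rate (1 / infectious period)
    """
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def peak_infected(beta, gamma, days=365):
    """Simulate a year and return the peak infected fraction."""
    s, i, r = 0.999, 0.001, 0.0   # start with 0.1% infected
    peak = i
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma)
        peak = max(peak, i)
    return peak

# Same disease, same recovery rate, two guesses at transmissibility:
mild = peak_infected(beta=0.2, gamma=0.1)
severe = peak_infected(beta=0.5, gamma=0.1)
```

Until the mechanisms are nailed down, beta and gamma are guesses, and the forecast inherits all of that uncertainty.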



Out at Sea, With No Way to Navigate: Admiral James Stavridis Talks Cybersecurity

We're still figuring out how this is going to work. To shift metaphors to the oceans, it's as though we're out at sea, we're in a bunch of boats, but we haven't really put in place buoys and navigational aids, and we haven't really defined who's going to protect us. So if I'm a commercial ship at sea, I know the US Navy is going to come and defend me if I'm an American ship and I'm under attack. And in fact, we actively discourage merchant ships from mounting their own defenses. The defense requirements, I think, ought to be vested in the state. But in the world of cyber, realistically, if you're a commercial entity, particularly a target-rich kind of environment like financials or critical infrastructure, say electric grid, the government so far has not really stepped up to that task of broadly protecting you. Yeah, you can get some help from the NSA and some help from the FBI and some help from the CIA. But broadly speaking, you are going to have to have some mechanisms, at least on the detection and on the defensive side.


Containers march into the mainstream

A decade ago, Solomon Hykes’ invention of Docker containers had an analogous effect: With a dab of packaging, any Linux app could plug into any Docker container on any Linux OS, no fussy installation required. Better yet, multiple containerized apps could plug into a single instance of the OS, with each app safely isolated from the others, talking only to the OS through the Docker API. That shared model yielded a much lighter-weight stack than the VM (virtual machine), the conventional vehicle for deploying and scaling applications in cloudlike fashion across physical computers. So lightweight and portable, in fact, that developers could work on multiple containerized apps on a laptop and upload them to the platform of their choice for testing and deployment. Plus, containerized apps start in the blink of an eye, as opposed to VMs, which typically take the better part of a minute to boot. To grasp the real impact of containers, though, you need to understand the microservices model of application architecture. Many applications benefit from being broken down into small, single-purpose services that communicate with each other through APIs, so that each microservice can be updated or scaled independently.


Democratizing data, thinking backwards and setting North Star goals

Essentially, the database is a fairly old technology, but it has always been about three things. One thing is value. How do you get the best out of your data, which is, what are the features that you provide, the power of querying the data, of updating it, of correlating it, and doing things with the data? The second thing has been security. How do you make sure that the data stays under your control, that you own it and determine what happens with the data? And the third, which I would call cost or performance, is making sure that you don’t overpay for the data, right? That it gets more and more affordable to do what you want to do with your data and control it. ... The best way to process data is if it’s really structured and you know exactly what it is, right? And you have a schema, essentially. And I spent a lot of time working on semi-structured data, which has some structure that you kind of extract, and that is about getting good value out of all data, not just your structured data like your bank accounts, but also your email, the books you write, the Word documents you write, getting some value out of that.
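The "structure that you kind of extract" from semi-structured data can be illustrated with a small sketch: scan JSON-like records and note which fields appear with which types, yielding a rough schema the data never declared. The records and field names below are invented for the example.

```python
import json

def infer_schema(records):
    """Infer a field -> set-of-type-names schema from dict records.

    Semi-structured data (JSON, logs, documents) carries some implicit
    structure; recording which fields occur with which value types is
    one simple way to extract it so the data can be queried like a table.
    """
    schema = {}
    for rec in records:
        for field, value in rec.items():
            schema.setdefault(field, set()).add(type(value).__name__)
    return schema

# Hypothetical semi-structured input: same kind of entity, uneven fields.
raw = [
    '{"user": "ada", "amount": 120.5, "note": "wire"}',
    '{"user": "lin", "amount": 80}',
    '{"user": "sam", "amount": 42.0, "tags": ["refund"]}',
]
schema = infer_schema(json.loads(line) for line in raw)
```

The inferred schema exposes both the shared core (`user`, `amount`) and the irregularities (an `int`/`float` mix, optional fields), which is exactly where the "value out of all data" work begins.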


Artificial intelligence and machine learning an essential part of cybersecurity


World Wide Technology also plans to use AI and ML this year as part of its cybersecurity plans, according to chief technology advisor Rick Pina. "In today's digital age, the security of data, applications, and processes is of the utmost importance; and AI and ML now play an integral part in this cybersecurity process. AI and ML have brought enticing new prospects for speed, accuracy, and connectivity to the public and private sectors, allowing government agencies and corporate organizations to make great strides in governed self-service access, alongside data security and reliability," Pina said. ... Michael Hanken, vice president of IT at Multiquip, said he isn't planning to use AI and ML yet, but he is researching its benefits and limits to see how it might work in conjunction with cybersecurity in the future. Dan Gallivan, director of IT for Payette, said, "AI and ML are not part of the official plan this year but I do feel they are in the not too distant future as we learn more about artificial intelligence and machine learning development capabilities and then experiment with them in cybersecurity."


7 Cloud Attack Techniques You Should Worry About

As organizations transition to cloud environments, so too do the cybercriminals targeting them. Learning the latest attack techniques can help businesses better prepare for future threats. "Any time you see technological change, I think you certainly see attackers flood to either attack that technological change or ride the wave of change," said Anthony Bettini, CTO of WhiteHat Security, in a panel at last week's RSA Conference. It can be overwhelming for security teams when organizations rush headfirst into the cloud without consulting them, putting data and processes at risk. Attackers are always looking for new ways to leverage the cloud. Consider the recently discovered "Cloud Snooper" attack, which uses a rootkit to bring malicious traffic through a victim's Amazon Web Services environment and on-prem firewalls before dropping a remote access Trojan onto cloud-based servers. As these continue to pop up, many criminals rely on tried-and-true methods, like brute-forcing credentials or accessing data stored in a misconfigured S3 bucket. There's a lot to keep up with, security pros say.


Robotic Process Automation Implementation Choices


The first step in implementing RPA is identifying tasks that lend themselves to automation. Although RPA application areas cut across broad swaths of organizations, there are common characteristics to look for. Specifically, IBM notes that an “RPA-ready” task is one that is: simple, consistent, and repeatable; repetitive and low-skill, creating human issues such as high error rates and low worker morale; part of an existing or planned process where stripping off routine work can free humans and deliver significant productivity, efficiency, or cost benefits; and able to offer meaningful opportunities to improve customer and worker experiences by speeding up existing processes. Some tasks may meet many of these criteria and still not be suitable for RPA. For example, a task may meet every criterion, but if it requires additional data capture capabilities or a redesign of the process, RPA may not be the right fit. RPA can be applied to a very broad range of tasks across most industries.


Android security warning: One billion devices no longer getting updates


All of the phones in the tests were infected successfully by Joker – also known as Bread – malware. Every single device tested was also infected with Bluefrag, a critical vulnerability that focuses on the Bluetooth component of Android. Which? said there should be greater transparency around how long updates for smart devices will be provided so that consumers can make informed buying decisions, and that customers should get better information about their options once security updates are no longer available. The watchdog also said that smartphone makers have questions to answer about the environmental impact of phones that can only be supported for three years or less. Google told ZDNet: "We're dedicated to improving security for Android devices every day. We provide security updates with bug fixes and other protections every month, and continually work with hardware and carrier partners to ensure that Android users have a fast, safe experience with their devices." When operating systems and security updates are delivered varies depending on the device, manufacturer and mobile operator. Because smartphone makers will tweak bits of the Android operating system, they often deploy patches and updates at a slower pace than Google does on its own devices, or not at all.


The Dark Side of Microservices

From a technical perspective, microservices are strictly more difficult than monoliths. However, from a human perspective, microservices can have an impact on the efficiency of a large organization. They allow different teams within a large company to deploy software independently. This means that teams can move quickly without waiting for the lowest common denominator to get their code QA’d and ready for release. It also means that there’s less coordination overhead between engineers/teams/divisions within a large software engineering organization. While microservices can make sense, the key point here is that they aren’t magic. Like nearly everything in computer science, there are tradeoffs; in this case, trading technical complexity for organizational efficiency. A reasonable choice, but you had better be sure you need that organizational efficiency for the technical challenges to be worth it. Yes, of course, most clocks on earth aren’t moving anywhere near the speed of light. Furthermore, several modern distributed systems rely on this fact by using extremely accurate atomic clocks to sidestep the consensus issue.
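The closing remark alludes to systems such as Google Spanner, whose TrueTime service uses atomic clocks to keep clock uncertainty tightly bounded, so transactions can be ordered by timestamp without running a consensus round for each one. A loose sketch of that "commit wait" idea follows; the uncertainty bound here is an invented stand-in for what a real clock service would report.

```python
import time

# Assumed clock uncertainty bound in seconds. Atomic clocks make this
# small enough in practice that waiting it out is affordable.
EPSILON = 0.005

def commit_timestamp():
    """Assign a commit timestamp, then wait out the uncertainty bound.

    After the wait, every node in the system, even one whose clock runs
    EPSILON fast or slow, agrees this timestamp is in the past, so
    timestamp order matches real-time order without extra coordination.
    """
    ts = time.monotonic() + EPSILON   # latest possible "now"
    while time.monotonic() < ts:      # commit wait
        time.sleep(EPSILON / 10)
    return ts

# Two commits in sequence always get strictly ordered timestamps.
t1 = commit_timestamp()
t2 = commit_timestamp()
```

The cost is latency (every commit pays the wait), which is why the technique only pays off when the uncertainty bound is very small.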


Essential things to know about container networking

Choosing the right approach to container networking depends largely on application needs, deployment type, use of orchestrators and underlying OS type. "Most popular container technology today is based on Docker and Kubernetes, which have pluggable networking subsystems using drivers," explains John Morello, vice president of product management, container and serverless security at cybersecurity technology provider Palo Alto Networks. "Based on your networking and deployment type, you would choose the most applicable driver for your environment to handle container-to-container or container-to-host communications." "The network solution must be able to meet the needs of the enterprise, scaling to potentially large numbers of containers, as well as managing ephemeral containers," Letourneau explains. The process of defining initial requirements, determining the options that meet those requirements, and then implementing the solution can be as important as choosing the right orchestration agent to provision and load balance the containers. "In today's world, going with a Kubernetes-based orchestrator is a pretty safe decision," Letourneau says.



Quote for the day:


"Leadership without mutual trust is a contradiction in terms." -- Warren Bennis