
Daily Tech Digest - May 11, 2023

Will Rogue AI Become an Unstoppable Security Threat?

The rogue AI concept generally refers to AI systems that have been trained to generate or identify opportunities to exploit code or system vulnerabilities and then take some form of destructive action without human intervention, Saylors says. That action could be creating code known to be vulnerable and publishing it to a common code repository with the expectation that it would be exploited at a later date. It could also be the active exploitation of vulnerabilities by the AI technology itself. The latter action is an extreme example, Saylors says, and generally only a concern for governments or high-profile enterprises, such as defense contractors and financial institutions. “Such organizations already tend to be under constant attack from well-funded APT groups,” he notes. Unfortunately, as sophisticated AI technologies such as ChatGPT become widely available, they will be trained to exploit code or system vulnerabilities. “I’m not saying ChatGPT, specifically, will do this, but I’m suggesting that bad actors will clone this type of technology and train it for nefarious use,” Saylors says.


Generative AI Will Transform Software Development. Are You Ready?

The coming convergence of generative AI and software development will have broad implications and pose new challenges for your IT organization. As an IT leader, you will have to strike the balance between your human coders—be they professionals or cit-devs—and their digital coworkers to ensure optimal productivity. You must provide your staff guidance and guardrails that are typical of organizations adopting new and experimental AI. Use good judgment. Don’t enter proprietary or otherwise corporate information and assets into these tools. Make sure the output aligns with the input, which will require an understanding of what you hope to achieve. This step, aimed at pro programmers with knowledge of garbage-in/garbage-out practices, will help catch some of the pitfalls associated with new technologies. When in doubt, give IT a shout. Or however you choose to lay down the law on responsible AI use. Regardless of your stance, the rise of generative AI underscores how software is poised for its biggest evolution since the digital Wild West known as Web 2.0.


AI outcry intensifies as EU readies regulation

AI offers both the potential to grow the business and a significant risk by eroding a company’s unique selling point (USP). While business leaders assess its impact, there is an outcry from industry experts and researchers, which is set to influence the direction future AI regulations take. In an interview with the New York Times discussing his decision to leave Google, prominent AI scientist Geoffrey Hinton warned of the unintended consequences of the technology, saying: “It is hard to prevent bad actors from doing bad things.” Hinton is among a number of high-profile experts voicing their concerns over the development of AI. An open letter, published by the Future of Life Institute, has over 27,000 signatories calling for a pause in the development of AI, among them Tesla and SpaceX founder, Elon Musk – who, incidentally, is a co-founder of OpenAI, the organisation behind ChatGPT. Musk has been openly critical of advancements such as generative AI, but he is reportedly working on his own version. According to the Financial Times, Musk is bringing together a team of engineers and researchers to develop his own generative AI system and has “secured thousands of high powered GPU processors from Nvidia”.


Refined methodologies of ransomware attacks

“Rates of encryption have returned to very high levels after a temporary dip during the pandemic, which is certainly concerning. Ransomware crews have been refining their methodologies of attack and accelerating their attacks to reduce the time for defenders to disrupt their schemes,” said Chester Wisniewski, field CTO, Sophos. ... “With two thirds of organizations reporting that they have been victimized by ransomware criminals for the second year in a row, we’ve likely reached a plateau. The key to lowering this number is to work to aggressively lower both time to detect and time to respond. Human-led threat hunting is very effective at stopping these criminals in their tracks, but alerts must be investigated, and criminals evicted from systems in hours and days, not weeks and months. Experienced analysts can recognize the patterns of an active intrusion in minutes and spring into action. This is likely the difference between the third who stay safe and the two thirds who do not. Organizations must be on alert 24×7 to mount an effective defense these days,” said Wisniewski.


Automation: 3 ways it boosts productivity and reduces burnout

When we automate, we can carve out more time for the big stuff—and the more time we spend on the big stuff, the more engaged we become. Engaged employees aren’t just happier; they also create better customer experiences. Companies, in turn, can charge more for their services. The bottom line: Higher engagement is a win for everyone—companies, customers, and employees alike. To identify your most meaningful work, ask yourself what you enjoy doing the most and what delivers the most impact. For me, that’s writing and high-level strategizing. For a journalist, it might be drafting compelling narratives. For a designer, it might be brainstorming creative and beautiful ways to solve a customer’s problem. ... The benefits of automation are multifold: It increases engagement and productivity; it overcomes human limitations like the need to rest because with automation you set it and forget it; it minimizes errors; and it establishes processes that can be consistently refined. This list is not exhaustive. But here’s the rub: Automation can’t be established in a vacuum. 


NoOps vs. ZeroOps: What Are the Differences?

ZeroOps works from the philosophy that a company’s IT team is uniquely positioned to create innovation that services the organization — if it has time to think, rather than constantly chasing tickets or dealing with upkeep, that is. With more time free, IT teams might create new infrastructure that provides enhanced performance for specific corporate applications or might suggest ways in which current applications can be improved. The opportunities are limitless — if only operations teams had the time to do what they need to be doing! And with ZeroOps, they finally can. A ZeroOps provider works with the IT team to create an environment that is ideally suited to the organization, but in which the ZeroOps provider uses a combination of intelligent automation and remote support to relieve the IT team of the general burden of ensuring the system runs properly. Removing these burdens from a team’s shoulders allows them to place focus back on where it should have been in the first place. In other words, innovation and creation are actually possible again, instead of being bogged down by the backlog of things to do to keep everything running.


Quantifying the Value of Data to Business Leaders

The ROI of data is frequently obscured when critical data points fail to form a bigger picture, said Soares. For example, a modest profit from a particular business asset might not be tracked against a long-enough timescale to warrant its initial price tag. ... How is it possible to change business culture to recognize the true value of data? Soares suggested that there is an ultimately simple way to begin benchmarking across companies to assign data value without resorting to “voodoo economics.” “The value of a company’s data divided by the value of the company is what we call a data monetization index,” noted Soares. “And we have another metric called intangible asset index.” Data-related intangibles include customer data, employee data, reference data, reports, critical data elements, and more. How does one identify a critical data element? Soares estimates that roughly 10% of corporate data would fall under this category, though this number is contextual: What may be critical for one application may not be critical for another. 
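Soares' data monetization index is a simple ratio, which can be illustrated with a short sketch. The figures below are hypothetical placeholders, chosen only to show the arithmetic:

```python
# Hypothetical figures for illustration only; the metric itself is
# just the ratio Soares describes: data value / company value.
company_value = 500_000_000        # e.g., market capitalization
estimated_data_value = 40_000_000  # assessed value of the company's data

# Data monetization index: value of the company's data divided by
# the value of the company.
data_monetization_index = estimated_data_value / company_value
print(f"{data_monetization_index:.2%}")  # 8.00%
```

In practice the hard part is estimating the numerator: as the passage notes, perhaps only around 10% of corporate data is critical, and which 10% depends on the application.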


Does Your Organization Need a CISO or an External Advisor?

The question on every leader’s mind now is, what is the best way to prepare? Should businesses hire a Chief Information Security Officer (CISO), or incorporate an advisor to the organization's board? Based on our work, we have several recommendations to navigate the best option for your organization: Each business context requires a different cybersecurity strategy. Factoring in the types of threats faced and their level of criticality is also key in the decision-making process. The relevant threat exposure may include manufacturing facilities, high-value IP (next-generation tech, particularly if related to communications or weapons), infrastructure (e.g., energy generation or distribution), ransomware targets, and other exploitation opportunities. Being open to exploring hybrid models can be a way to avoid missteps. What level of sophistication does your organization need in a CISO or advisor? Companies with low threat levels (are there any left?) or limited resources may want to rely on external vendors and advisors at the early stages of their cybersecurity journey, rather than hiring a CISO immediately.


4 strategies for embracing ‘Everywhere Work’ in 2023

“When it comes to how and where employees work – leaders who do not embrace and enable flexibility where they can – also risk not reaping the benefits of a more engaged, more productive workforce,” said Jeff Abbott, CEO at Ivanti. “Attracting and retaining the very best talent will always be an executive priority, but the organisations that embrace an Everywhere Work mindset – and supporting tech stack – will have a sustainable competitive advantage. There has been a seismic shift in how and where employees expect to get work done and it's imperative for leaders to break down culture and tech barriers to enable it.” As employees strive to strike a balance between work and personal life, they are pushing for new ways of working that help them reduce long commutes and minimise the negative impact on their health and well-being. Unfortunately, many employers are still hesitant to fully embrace virtual work arrangements, treating them as temporary solutions that may be reversed in the future. This reluctance to embrace remote work has led to widespread burnout and disengagement among knowledge workers, particularly younger employees.


Introducing the Data Trust Index: A New Tool to Drive Data Democratization

Data quality frameworks have traditionally focused solely on technical data quality dimensions; the Data Trust Index places a heavy emphasis on the social trust component of confirmability to account for the emotional and cultural factors that shape how people perceive and interact with data in their organizations. The adoption and implementation of data quality frameworks have typically been regarded as the necessary step for any organization wishing to promote data democratization. Good quality data will increase use of the data, or so the logic goes. Our conviction is that a data quality framework is only the necessary first step, that true data democratization requires a holistic approach that appeals to both the logical and emotional sides of people. The Data Trust Index brings data trust out of the realm of sterile dashboards and into something tangible that instills confidence in data and helps create a culture of trust around data. We developed the critical components of the Trust Framework (Credibility, Consistency, Confirmability) over many conversations about what was working and what wasn’t for our clients seeking benefits out of investments in data.



Quote for the day:

"To be successful, you have to have your heart in your business, and your business in your heart." -- Thomas Watson, Sr.

Daily Tech Digest - January 14, 2023

How to build the most impactful engineering team without adding more people

Teams celebrate a 10% improvement in efficiency when they should be looking for a 10x improvement in efficiency. Identify key moments in your product lifecycle when it makes sense to step back and identify the substantial changes that can supercharge productivity. My company builds connectors into a huge variety of data sources. At one time, we were writing 5,000 lines of code to create a single connector, which was not sustainable. Now, a single engineer can build a connector in a week with 100 lines of code. We achieved this by designing a new development framework that allows us to exploit commonalities across the connectors we build and by greatly reducing dependencies among engineers. As soon as one engineer needs input from six other engineers to complete a task, productivity takes a massive hit. Here's a thought experiment you can run to help find your own 10x improvement: Imagine your workload scales 10x overnight, and you absolutely must meet this increase without hiring more engineers or working additional hours. How do you do it? An out-of-the-box thought exercise like this can help you radically improve your approach.


Your project is unique, so why make it replicable?

While replicability isn’t as important as delivery in a modern environment, where software is often unique to the organisation, it is important to be able to prove effectiveness. At Catapult, we use an upskilling system that we call the Lighthouse Model, whereby we identify a team from the ground up that can act as a model for the rest of the business and focus first on developing them as a group. By demonstrating the effectiveness of agile as a foundation on which to build software, a Lighthouse team creates a fertile environment, which removes blocks and gathers data to help develop buy-in across the board. All this works. In 2018, the Standish Group established that ‘Agile projects’ are twice as likely to succeed as waterfall projects. In the same study, the company notes that 28 per cent of waterfall projects fail, while only 11 per cent of agile projects meet the same fate. In this context, the metrics of success went beyond whether the project was on time and on budget and considered its outcomes and impact. They looked beyond delivery against the plan to include the value delivered and customer satisfaction. In essence, they looked for the real meaning of success.


A New Definition of Reliability

The first thing you might assume is that reliability is synonymous with availability. After all, if a service is up 99% of the time, that means a user can rely on it 99% of the time, right? Obviously, this isn’t the whole story, but it’s worth exploring why. For starters, these simple system health metrics aren’t really so “simple.” Starting with just the Four Golden Signals, you’ll end up with the latency, resource saturation, error rate, and uptime of all your different services. For a complex product, this adds up to a whole lot of numbers. How do you combine and weigh all these metrics? Which are the important ones to watch and prioritize? Judging things like errors and availability can be difficult too. Gray failure, or when a service isn’t working completely but hasn’t totally failed either, can be hard to capture with quantitative metrics. How do you decide when a service is “available enough”? What about a situation where your service performs exactly as intended, but doesn’t align with your customers’ expectations? How do you capture these in your picture of system health? Clearly, there needs to be another layer to this definition of reliability!


Architecture Pitfalls: Don’t use your ORM entities for everything — embrace the SQL!

I suspect one of the greatest lies ever told in web application development is that if you use an ORM you can avoid writing and understanding SQL, “it’s just an implementation detail”. That might be true at first, but once you go beyond the basics that falls away quickly. ... It’s much better to let the database do this kind of filtering. After all, it’s what all of the clever folk who work on databases spend a lot of time and effort optimising. For most ORMs you have the option of writing analogues to SQL which can get you quite a long way. For example, JPA has JPQL and Hibernate has HQL. These let you build abstracted queries that should work on all databases that your ORM supports. The implication of this is that your team needs to embrace SQL and understand how to use it, rather than avoiding it by using application code instead. To dispel a common source of anxiety on this: you don’t need to be a SQL guru to get started and become familiar with what you will need for the vast majority of your implementation requirements. There are also excellent resources and books available, I will link some below. 
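The "let the database do the filtering" point can be shown with a minimal sketch. The excerpt's examples are JPQL/HQL; here is the same idea in Python with the standard-library `sqlite3` module and a hypothetical `orders` table, contrasting application-side filtering with a `WHERE` clause:

```python
import sqlite3

# In-memory demo database with a hypothetical schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "open", 10.0), (2, "shipped", 25.0), (3, "open", 7.5)],
)

# Anti-pattern: fetch every row, then filter in application code.
# This ships the whole table over the wire and ignores any indexes.
all_rows = conn.execute("SELECT id, status, total FROM orders").fetchall()
open_orders_app = [row for row in all_rows if row[1] == "open"]

# Better: push the predicate into SQL so the database does the filtering.
open_orders_db = conn.execute(
    "SELECT id, status, total FROM orders WHERE status = ?", ("open",)
).fetchall()

assert open_orders_app == open_orders_db
print(len(open_orders_db))  # 2
```

With three rows the difference is invisible; with millions, the application-side version materialises the whole table in memory while the SQL version returns only the matching rows, which is exactly the work the database is optimised to do.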


How To Build A Network Of Security Champions In Your Organization

An SCP enlists employees from all different disciplines across a company (HR, marketing, finance, etc.) for focused cybersecurity training and guidance. These security champions then become the contact point and voice for cybersecurity within their various departments or offices alongside their main role. They help to advise on, embed and reinforce good security practices with their colleagues. This makes security advice more relatable and accessible, avoiding the “us versus them” attitude that can sometimes exist between employees and traditional enterprise security teams. It’s easier for a colleague to explain a security risk or issue to a co-worker than it is for a security pro whom the co-worker has never met. The security champion’s role is a little like that of a department’s fire marshal. In the same way that the marshal doesn’t need to be a specialist in firefighting, the security champion doesn’t need to be an IT or infosec pro; they just need to know how their colleagues work, what the security risks are within their department or team and the common-sense steps to take to mitigate those risks. 


Companies warned to step up cyber security to become ‘insurable’

Carolina Klint, risk management leader for continental Europe for insurance broker Marsh, and one of the contributors to the report, said that insurance companies were now coming out and saying that “cyber risk is systemic and uninsurable”. That means, in future, companies may not be able to find cover for risks such as ransomware, malware or hacking attacks. “It’s up to the insurance industry and to the capital markets whether or not they find the risk palatable,” she said in an interview with Computer Weekly, “but that is the direction it is moving in.” In recent days, cyber attacks have disrupted the international delivery services of the Royal Mail and infected IT systems at the Guardian newspaper with ransomware. The Global Risks Report rates cyber warfare and economic conflict as more serious threats to stability than the risks of military confrontation. “There is a real risk that cyber attacks may be targeted at critical infrastructure, health care and public institutions,” said Klint. “And that would have dramatic ramifications in terms of stability.”


6 Roles That Can Easily Transition to a Cybersecurity Team

Software engineers possess various technical skills, including coding and software development. They also understand the complexities involved in developing a secure application. This makes them well-suited for different types of cybersecurity tasks. ... They should also be familiar with various cyber threats, such as malware and phishing. Additionally, since software development is constantly evolving, software engineers should be prepared to keep up with the latest trends to remain competitive. ... Network architects possess a strong knowledge of networking technologies and are proficient in setting up secure networks. While not all security roles require a deep technical understanding, network architects are well-suited to design secure networks and implement protection measures. They can also review existing systems for vulnerabilities and recommend solutions to mitigate risks. ... They should also be familiar with emerging technologies and techniques related to cybersecurity, such as artificial intelligence (AI) and machine learning (ML). Another important skill for network architects is identifying and differentiating between legitimate and malicious traffic signals.


Getting started with data science and machine learning: what architects need to know

In almost every scientific field, the role of the data scientist is actually played by a physicist, chemist, psychologist, mathematician (for numerical experiments), or some other domain expert. They have a deep understanding of their field and pick up the necessary techniques to analyze their data. They have a set of questions they want to ask and know how to interpret the results of their models and experiments. With the increasing popularity of industrial data science and the rise of dedicated data science educational programs, however, a typical data scientist's training now lacks that domain-specific grounding. ... There are two opposing approaches. One is to know which tool to use, pick up a pre-implemented version online, and apply it to a problem. This is a very reasonable approach for most practical problems. The other is to deeply understand how and why something works. This approach takes much more time but offers the advantage of modifying or extending the tool to make it more powerful.


ZeroOps Helps Developers Manage Operational Complexity

The first thing to take into account when implementing ZeroOps for your business: You must consider everything that isn’t directly driving value. Who should be doing those tasks? You want your core staff to be focused on the business, so it’s worth considering a managed service provider as a partner. This can help provide your team with the skills and support they need, while allowing them to focus on their core competencies. The right tools can help your team be more productive than you ever imagined, without hiring new full-time employees. ... More agile, with less pressure and responsibility to handle “the little things” that we know aren’t so little. Imagine how your team members could shine when supported by experts to assist them so they can focus on providing value. Imagine being able to deliver projects much more quickly so delivery expectations actually aligned with what was realistic. ... Managed services can help make your team more productive and capitalize on their talent. When you struggle with a problem, it’s likely that your managed service provider has already solved it for others so you don’t have to reinvent the wheel.


Dark Web Monitoring For Law Firms: Is It Worthwhile?

One real value for a dark web scan is awareness. You should be able to obtain an initial dark web scan free of charge – without paying an ongoing monthly monitoring fee, which we certainly don’t recommend. The initial report will help identify if you have law firm employees that tend to reuse the same password across multiple sites. It may even identify sites you were not aware of so that you can immediately change the password. Use the dark web scan to educate employees at your next cybersecurity awareness training session. If you’re not teaching your employees about cybersecurity, at least annually, you are missing a very significant part of cyber resilience! A human element is involved in data breaches 82% of the time. Take control of your data and don’t hand it over to a monitoring service. You should be using a password manager and a unique password for each website or application you use. Put a freeze on your credit file at the three major credit bureaus. Freezing your credit file is free. 



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey