Showing posts with label RaaS. Show all posts

Daily Tech Digest - August 20, 2024

Humanoid robots are a bad idea

Humanoid robots that talk, perceive social and emotional cues, elicit empathy and trust, trigger psychological responses through eye contact and trick us into the false belief that they have inner thoughts, intentions and even emotions create what I consider a real problem for humanity. Our response to humanoid robots is based on delusion. Machines — tools, really — are being deliberately designed to hack our human hardwiring and deceive us into treating them as something they’re not: people. In other words, the whole point of humanoid robots is to dupe the human mind, to mislead us into having the kind of connection with these machines formerly reserved exclusively for other human beings. Why are some robot makers so fixated on this outcome? Why isn’t the goal instead to create robots that are perfectly designed for their function, rather than perfectly designed to trick the human mind? Why isn’t there a movement to make sure robots do not elicit false emotions and beliefs? What’s the harm in preserving our intuition that a robot is just a machine, just a tool? Why try to route around that intuition with machines that trick our minds, coopting or hijacking our human empathy?


11 Irritating Data Quality Issues

Organizations need to put data quality first and AI second. Leaders who ignore this sequence fall into fear of missing out (FOMO), grasping at AI-driven cures for competitive or budget pressures, and they jump straight into AI adoption before conducting any honest self-assessment of the health and readiness of their data estate, according to Ricardo Madan, senior vice president at global technology, business and talent solutions provider TEKsystems. “This phenomenon is not unlike the cloud migration craze of about seven years ago, when we saw many organizations jumping straight to cloud-native services, after hasty lifts-and-shifts, all prior to assessing or refactoring any of the target workloads. This sequential dysfunction results in poor downstream app performance since architectural flaws in the legacy on-prem state are repeated in the cloud,” says Madan in an email interview. “Fast forward to today: AI is a great ‘truth serum’ informing us of the quality, maturity, and stability of a given organization’s existing data estate -- but rather than shying away from those unflattering truths, organizations should invest in holistic AI data readiness first, before AI tools.”


CISOs urged to prepare now for post-quantum cryptography

Post-quantum algorithms often require larger key sizes and more computational resources compared to classical cryptographic algorithms, a challenge for embedded systems in particular. During the transition period, systems will need to support both classical and post-quantum algorithms to maintain interoperability with legacy systems. Deirdre Connolly, cryptography standardization research engineer at SandboxAQ, explained: “New cryptography generally takes time to deploy and get right, so we want to have enough lead time before quantum threats are here to have protection in place.” Connolly added: “Particularly for encrypted communications and storage, that material can be collected now and stored for a future date when a sufficient quantum attack is feasible, known as a ‘Store Now, Decrypt Later’ attack: upgrading our systems with quantum-resistant key establishment protects our present-day data against upcoming quantum attackers.” Standards bodies, hardware and software manufacturers, and ultimately businesses across the globe will have to implement new cryptography across all aspects of their computing systems. Work is already under way, with vendors such as BT, Google, and Cloudflare among the early adopters.
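The defence against “Store Now, Decrypt Later” described above typically takes the form of hybrid key establishment: one session key is derived from both a classical and a post-quantum exchange, so an attacker must break both. A minimal sketch, with toy byte strings standing in for real ECDH and ML-KEM outputs (a production protocol would use a labeled KDF such as HKDF rather than a bare hash):

```python
import hashlib

def hybrid_shared_key(classical_secret: bytes, pq_secret: bytes,
                      context: bytes = b"hybrid-kex-v1") -> bytes:
    """Derive one session key from both shared secrets; recovering it
    requires breaking BOTH key exchanges."""
    # Concatenate-then-hash is the simplest combiner; real deployments
    # use HKDF with domain-separated labels.
    return hashlib.sha256(context + classical_secret + pq_secret).digest()

# Toy values standing in for the classical and post-quantum outputs.
key = hybrid_shared_key(b"\x01" * 32, b"\x02" * 32)
assert len(key) == 32
```

Because the combiner hashes both inputs, the derived key stays secret even if the classical half is later broken by a quantum attacker.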


AI for application security: Balancing automation with human oversight

Security testing should be integrated throughout Application Delivery Pipelines, from design to deployment. Techniques such as automated vulnerability scanning, penetration testing, continuous monitoring, and many others are essential. By embedding compliance and risk assessment tasks into underlying change management processes, IT professionals can ensure that security testing is at the core of everything they do. Incorporating these strategies at the application component level ensures alignment with business needs to effectively prioritize results, identify attacks, and mitigate risks before they impact the network and infrastructure. ... To build a security-first mindset, organizations must embed security best practices into their culture and workflows. If new IT professionals coming into an organization are taught that security-first isn’t a buzzword, but instead the way the organization operates, it becomes company culture. Making security an integral part of the application delivery pipelines ensures that security policies and processes align with business goals. Education and communication are key—security teams must work closely with developers to ensure that security requirements are understood and valued. 
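The automated dependency scanning mentioned above reduces to a simple idea: compare pinned versions against an advisory feed. A minimal sketch, with a hypothetical advisory table standing in for a real feed:

```python
# Hypothetical advisory data: package name -> vulnerable versions.
ADVISORIES = {
    "requests": {"2.19.0", "2.19.1"},
    "pyyaml": {"5.3"},
}

def scan(dependencies: dict[str, str]) -> list[str]:
    """Return 'name==version' strings for any pinned dependency
    that appears in the advisory data."""
    return [f"{name}=={ver}" for name, ver in dependencies.items()
            if ver in ADVISORIES.get(name, set())]

findings = scan({"requests": "2.19.1", "pyyaml": "6.0", "flask": "3.0.0"})
# findings == ["requests==2.19.1"]
```

Running a check like this on every commit, rather than at deployment, is what "integrated throughout the pipeline" means in practice.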


TSA biometrics program is evolving faster than critics’ perceptions

Privacy impact assessments (PIAs) are not only carried out for each new or changed process, but also published and enforced. The images of U.S. citizens captured by the TSA may be evaluated and used for testing, but they are deleted within 12 hours. Travelers have the choice of opting out of biometric identity verification, in which case they go through a manual ID check, just like decades ago. As happened previously with body scanners, TSA has adapted the signage it uses to notify the public about its use of biometrics. Airports where TSA uses biometrics now have signs that state in bold letters that participation is optional, explain how it works and include QR codes for additional information. The technology is also highly accurate, with tests showing 99.97% accurate verifications. In the cases which do not match, the traveler must then go through the same manual procedure used previously and also in cases where people opt out. TSA does not use biometrics to match people against mugshots from local police departments, for deportations or surveillance. In contrast, the proliferation of CCTV cameras observing people on their way to the airport and back home is not mentioned by Senator Merkley.


Blockchain: Redefining property transactions and ownership

Blockchain’s core strength lies in its ability to create a secure, immutable ledger of transactions. In the real estate context, this means that all details related to a property transaction— from the initial agreement to the final transfer of ownership—are recorded in a way that cannot be altered or tampered with. Blockchain technology empowers brokers to streamline transactions and enhance transparency, allowing them to focus on offering personalised insights and strategic advice. This shift enables brokers to provide a more efficient and cost-effective service while maintaining their advisory role in the real estate process. Another innovative application of blockchain in real estate is through smart contracts. These are digital contracts that automatically execute when certain conditions are met, ensuring that the terms of an agreement are fulfilled without the need for manual oversight. In real estate, smart contracts can be used to automate everything from title transfers to escrow arrangements. This automation not only speeds up the process but also reduces the chances of disputes, as all terms are clearly defined and executed by the technology itself. Beyond improving the efficiency of transactions, blockchain also has the potential to change how we think about property ownership. 
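A smart contract of the kind described is essentially a state machine that self-executes once every condition is met. A toy escrow sketch (the condition names and price are illustrative, not drawn from any real platform):

```python
from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    """Toy escrow: funds release only when every condition holds."""
    price: int
    conditions: dict = field(default_factory=lambda: {
        "title_verified": False,
        "funds_deposited": False,
        "inspection_passed": False,
    })

    def satisfy(self, name: str) -> None:
        if name not in self.conditions:
            raise KeyError(name)
        self.conditions[name] = True

    def execute(self) -> str:
        # Self-executing step: the transfer happens automatically once
        # all terms are fulfilled, with no manual oversight.
        if all(self.conditions.values()):
            return "title transferred; funds released"
        pending = [k for k, v in self.conditions.items() if not v]
        return f"pending: {pending}"

deal = EscrowContract(price=350_000)
for c in ("title_verified", "funds_deposited", "inspection_passed"):
    deal.satisfy(c)
assert deal.execute() == "title transferred; funds released"
```

Because the terms are encoded explicitly, there is no ambiguity about when the transfer fires, which is where the reduction in disputes comes from.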


Agile Reinvented: A Look Into the Future

There’s no denying that agile is poised at a pivotal juncture, especially given the advent of AI. While no one knows how AI will influence agile in the long term, it is already shaping how agile teams are structured and how its members approach their work, including using AI tools to code or write user stories and jobs to be done. To remain relevant and impactful, agile must be responsive to the evolving needs of the workforce. Younger developers, in particular, seek more room for creativity. New approaches to agile team formation—including Team and Org Topologies or FaST, which relies on elements of dynamic reteaming instead of fixed team structures to tackle complex work—are emerging to create space for innovation. Since agile was built upon the values of putting people first and adapting to change, it can, and should, continue to empower teams to drive innovation within their organizations. This is the heart of modern agile: not blindly adhering to a set of rules but embracing and adapting its principles to your team’s unique circumstances. As agile continues to evolve, we can expect to see it applied in even more varied and innovative ways. For example, it already intersects with other methodologies like DevSecOps and Lean to form more comprehensive frameworks. 


Breaking Free from Ransomware: Securing Your CI/CD Against RaaS

By embracing a proactive DevSecOps mindset, we can repel RaaS attacks and safeguard our code. Here’s your toolkit: ... Don’t wait until deployment to tighten the screws. Integrate security throughout the software development life cycle (SDLC). Leverage software composition analysis (SCA) and software bill of materials (SBOM) creation, helping you scrutinize dependencies for vulnerabilities and maintain a transparent record of every software component in your pipeline. ... Your pipelines aren’t static entities; they are living ecosystems demanding constant vigilance. Leverage tools to implement continuous monitoring and logging of pipeline activity. Look for anomalies, suspicious behaviors and unauthorized access attempts. Think of it as having a cybersecurity hawk perpetually circling your pipelines, detecting threats before they take root. ... Minimize unnecessary access to your CI/CD environment. Enforce strict role-based access controls and least privilege. Utilize access control tools to manage user roles and permissions tightly, ensuring only authorized users can interact with sensitive resources. Remember, the 2022 GitHub vulnerability exposed the dangers of lax access control in CI/CD environments.
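The least-privilege advice comes down to a deny-by-default role table. A minimal sketch, with hypothetical roles and pipeline actions:

```python
# Hypothetical role table for a CI/CD system: each role maps to the
# smallest set of pipeline actions it needs.
ROLES = {
    "viewer":    {"read_logs"},
    "developer": {"read_logs", "trigger_build"},
    "release":   {"read_logs", "trigger_build", "deploy"},
}

def authorize(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in ROLES.get(role, set())

assert authorize("developer", "trigger_build")
assert not authorize("developer", "deploy")
assert not authorize("intern", "read_logs")  # unknown role -> denied
```

The key design choice is that absence from the table means denial; privileges are granted explicitly, never inherited by accident.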


Achieving cloudops excellence

Although there are no hard-and-fast rules regarding how much to spend on cloudops as a proportion of the cost of building or migrating applications, I have a few rules of thumb. Typically, enterprises should spend 30% to 40% of their total cloud computing budget on cloud operations and management. This covers monitoring, security, optimization, and ongoing management of cloud resources. ... Cloudops requires a new skill set. Continuous training and development programs that focus on operational best practices are vital. This transforms the IT workforce from traditional system administrators to cloud operations specialists who are adept at leveraging cloud environments’ nuances for efficiency. Beyond technical implementations, enterprise leaders must cultivate a culture that prioritizes operational readiness as much as innovation. The essential components are clear communication channels, cross-departmental collaboration, and well-defined roles. Organizational coherence enables firms to pivot and adapt swiftly to the changing tides of technology and market demands. It’s also crucial to measure success by deployment achievements and ongoing performance metrics. By setting clear operational KPIs from the outset, companies ensure that cloud environments are continuously aligned with business objectives. 


What high-performance IT teams look like today — and how to build one

“Today’s high-performing teams are hybrid, dynamic, and autonomous,” says Ross Meyercord, CEO of Propel Software. “CIOs need to create a clear vision and articulate and model the organization’s values to drive alignment and culture.” High-performance teams are self-organizing and want significant autonomy in prioritizing work, solving problems, and leveraging technology platforms. But most enterprises can’t operate like young startups with complete autonomy handed over to devops and data science teams. CIOs should articulate a technology vision that includes agile principles around self-organization and other non-negotiables around security, data governance, reporting, deployment readiness, and other compliance areas. ... High-performance teams are often involved in leading digital transformation initiatives where conflicts around priorities and solutions among team members and stakeholders can arise. These conflicts can turn into heated debates, and CIOs sometimes have to step in to help manage challenging people issues. “When a CIO observes misaligned goals or intra-IT conflict, they need to step in immediately to prevent organizational scar tissue from forming,” says Meyercord of Propel Software. 



Quote for the day:

"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - December 01, 2016

‘Cybersecurity has become a full-time job’ in healthcare

“Cybersecurity has become a full-time job,” Karl West, CISO of Intermountain Healthcare in Utah, said at AEHIX, an adjunct conference to the College of Healthcare Information Management Executives (CHIME) Fall CIO Summit this month in Phoenix. “There is a call for all of us to do better,” West said. He said that healthcare may only be at 30 percent to 50 percent of compliance with the required security regulations. Healthcare trails other industries in this area because it has spent so much money on transforming care with IT, while cybersecurity has ended up taking a back seat. At the annual U.S. News and World Report Healthcare of Tomorrow summit held earlier this month in Washington, D.C., Dr. Brian Jacobs, CMIO of Children’s National Medical Center, said that the hospital now dedicates 19 percent of its IT budget to security, Politico reported.


Destructive Hacks Strike Saudi Arabia, Posing Challenge to Trump

The ferocity of the attacks appear to have caught Saudi officials by surprise. Thousands of computers were destroyed at the headquarters of Saudi’s General Authority of Civil Aviation, erasing critical data and bringing operations there to a halt for several days, according to the people familiar with the investigation. There have been no reports of widespread transportation interruptions at the King Khalid International Airport in Riyadh or the other major airports. A spokesman for the aviation authority in Riyadh didn’t immediately respond to phone calls and e-mails requesting comment. The people familiar with the probe didn’t identify the other targets but one said they were all inside Saudi Arabia and included other government ministries in the kingdom, a country where information is highly controlled.


Most Organizations Not Adequately Prepared for Cyber Attacks: Marsh Cyber Handbook

“While cyber breaches are one of the most likely and expensive threats to corporations, few companies can quantify how great their cyber risk exposure is, which prevents them from protecting themselves,” according to an article in the handbook titled “Can You Put a Dollar Amount on Your Company’s Cyber Risk?” “Most managers rely on qualitative guidance from ‘heat maps’ that describe their vulnerability as ‘low’ or ‘high’ based on vague estimates that lump together frequent small losses and rare large losses,” adds the article. ... The challenge is “to build a smart, well-designed, cyber risk model that’s able to analyze potential direct revenue, liability, and brand loss scenarios.”


IoT to Get Security, Gateway Benchmarks

The working group for the gateway benchmark aims to deliver system-level benchmarks measuring overall throughput, latency and energy consumption for node-to-cloud communications. It will probably start with an industrial profile but has not yet specified what parameters it will measure. The group currently includes members from ARM, Dell, Flex and Intel and hopes to deliver a complete spec by next fall. It will use workloads generated across multiple physical ports to test multiple system components including the processor, physical and wireless interfaces and the operating system. “Today, without a standardized methodology, IoT gateway benchmarking is not realistic,” said Paul Teich, a principal analyst at Tirias Research and technical advisor to EEMBC.


MongoDB-as-a-Service on Pivotal Cloud Foundry

Mallika Iyer and Sam Weaver cover a brief overview of Pivotal Cloud Foundry and deep dive into running MongoDB as a managed service on this platform. The MongoDB service for Pivotal Cloud Foundry leverages the capabilities of Bosh 2.0 for on-demand dynamic provisioning of services while maintaining an integration with MongoDB's Cloud Ops Manager, to provide the best of both: PCF and MongoDB. Mallika Iyer is a Principal Software Engineer at Pivotal, and spends a lot of time building Bosh-managed services that run on Pivotal Cloud Foundry. She is a cloud architect and has an extensive background in NoSQL and Large-Scale Search. Sam Weaver is the Product Manager for Developer Experience at MongoDB, based in New York.


Data Breach Preparation and Response: Breaches are Certain, Impact is Not

It is a good practice to map out what you believe to be the Breach Breakdown in some sort of visual manner so that you can more clearly define your working hypothesis. You should also include a timeline of events that represents the chronological progression of the attack. This will be of particular interest to executives and general counsel as they prepare statements regarding what happened and when. In addition, you should also maintain a companion list of the impacted systems represented in the diagram. This list should include additional system details such as IP address, hostname, OS, system function (i.e., web server, database, workstation), and method of compromise.
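The list of impacted systems is easiest to work with if it is kept structured from the start. A sketch of one possible record layout, with the fields named above (all values hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ImpactedSystem:
    """One row in the impacted-systems list that accompanies
    the breach-breakdown diagram."""
    ip_address: str
    hostname: str
    os: str
    function: str            # e.g. web server, database, workstation
    compromise_method: str

inventory = [
    ImpactedSystem("10.0.4.17", "web01", "Ubuntu 20.04",
                   "web server", "SQL injection"),
    ImpactedSystem("10.0.5.3", "db01", "Windows Server 2019",
                   "database", "stolen credentials"),
]

# A structured list can be grouped or sorted for the executive timeline.
assert sorted(s.function for s in inventory) == ["database", "web server"]
```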


The real effect Google's Pixel phone is having on Android

Features unique to the Pixel, such as the Google Assistant, the Pixel camera, and Daydream ... plus the smartphone's deeper app integration [and] increased prominence of Android Pay ... will ultimately lead to users spending more money on Android, according to the research note. Morgan Stanley's analysts also predict that these features could see the Pixel driving higher mobile search monetization for Google as advertisers will spend more to reach the consumers who spend the most on their mobiles. And there you have it. The Pixel is ultimately a vessel for Google to bring its own mobile vision directly to mainstream users. That benefits Google as a company, and it benefits us as consumers who carry Android phones.


Disaster recovery testing: A vital part of the DR plan

The cost of implementing disaster recovery is directly affected by the level of recovery required so, to contain costs, applications have to be prioritised against a set of metrics that determine recovery requirements. Recovery time objective (RTO) describes the amount of time a business application can tolerate being unavailable, usually measured in hours, minutes or seconds. We can imagine applications that deliver core banking for financial organisations have an RTO=0, whereas some back-end reporting functions may have an RTO of up to 4 hours. Recovery point objective (RPO) describes the previous point in time from which an application should be recovered. To use our banking example again, an RPO of zero will be expected for most applications – we don’t want to accept any lost transactions.
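A quick way to sanity-check an RPO against a backup schedule: the worst-case data loss from backup-based recovery equals the time since the last backup, so the backup interval must not exceed the RPO. A small illustration using the banking and reporting examples above:

```python
def meets_rpo(backup_interval_min: float, rpo_min: float) -> bool:
    """Worst-case data loss equals the time since the last backup,
    so the backup interval must not exceed the RPO."""
    return backup_interval_min <= rpo_min

# Back-end reporting with a 4-hour RPO is fine with hourly backups.
assert meets_rpo(60, 240)

# Core banking with RPO = 0 cannot be met by periodic backups at all;
# it requires synchronous replication.
assert not meets_rpo(60, 0)
```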


How is runtime as a service different from PaaS or IaaS?

RaaS differs from platform as a service (PaaS) in that many PaaS environments are long-running, although they automatically scale the application up or down as RaaS does. Additionally, a traditional PaaS deployment limits developers to a specific application framework. With many RaaS concepts, developers essentially deploy code in a container that starts on demand. The major thing to focus on when building an application using RaaS is minimal bootstrapping, so the runtime can start up, execute and close down quickly. Infrastructure as a service (IaaS) is a traditional cloud computing service where companies pay by the hour for compute environments, whether they're actively used or idle. While it's the least efficient form of cloud computing, IaaS is still the go-to for most companies, primarily because it's the most similar to traditional programming.
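The minimal-bootstrapping advice translates to: do as little as possible at module load and keep the entry point stateless. A sketch of a RaaS-style handler (the event shape and function name are illustrative, not any specific provider's API):

```python
import time

# Module-level setup runs once per cold start; keep it minimal so the
# runtime can spin up, execute and shut down quickly.
START = time.monotonic()

def handler(event: dict) -> dict:
    """Stateless entry point the RaaS runtime invokes on demand."""
    return {
        "result": event.get("value", 0) * 2,
        "uptime_ms": round((time.monotonic() - START) * 1000, 1),
    }

resp = handler({"value": 21})
assert resp["result"] == 42
```

Anything expensive done at the top of the module is paid on every cold start, which is why heavyweight frameworks are a poor fit for this model.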


The Hardest Part About Microservices

The journey to microservices is just that: a journey. It will be different for each company. There are no hard and fast rules, only tradeoffs. Copying what works for one company just because it appears to work at this one instant is an attempt to skip the process and journey and will not work. And the point to make here is that your enterprise is not Netflix. In fact, I’d argue that for however complex the domain is at Netflix, it’s not as complicated as it is at your legacy enterprise. Searching for and showing movies, posting tweets, updating a LinkedIn profile, etc., are all a lot simpler than your insurance claims processing systems. These internet companies went to microservices because of speed to market, sheer volume, and scale.



Quote for the day:


"I think we ought to read only the kind of books that wound and stab us. If the book we are reading doesn't wake us up with a blow on the head, what are we reading it for?" -- Franz Kafka