
Daily Tech Digest - October 19, 2023

Regulations are still necessary to compel adoption of cybersecurity measures

Ultimately, though, there should be clear mandates to push the industry toward clear outcomes, Rivas said. Such requirements, for example, could include a proper patch management strategy and robust monitoring system, Sondhi said. These should be accompanied by roadmaps for rollout, so market players would be given the necessary timelines to ensure compliance, he added. Acknowledging there will inevitably be pushback over the impact such mandates have on cost and time-to-market, he said regulations need not be overly complex. They also can point to accompanying standards bodies tasked with providing more details and updating best practices when necessary. This will free up governments from having to keep up with market changes and let them instead focus on mandating high-level requirements, he noted. Enforcement also is a good starting point when the road toward cyber resilience may be long and fraught with complexities. Organizations in operational technology (OT) sectors, in particular, have ecosystems that have to be managed differently from IT infrastructures, Sondhi said.


Beyond The 10X Software Engineer: Focusing On The Bigger Picture

Match team responsibilities with the load they can handle. You can do this with additional training, a good choice of underlying technologies, pair programming, reshuffling responsibilities among teams, and strategic hiring for the critical skills still missing. For new team members, focus on tasks doable first within a four-hour time slot and then in two to three days, so they can experience repeatable success right away. With time, you can extend the average task timeline to two weeks. Make sure there's a variety of tasks of similar complexity. For example, you don't want to corner a software developer into only fixing bugs. Mix things up to challenge team members with creative tasks like minor new functionalities. Eliminate excessive bureaucracy and low-value business processes. Boring or superfluous administrative tasks are a real motivation drain, so reviewing them and removing those with low value has a visible impact. Map team competencies to task complexity. This can be done formally or informally and usually narrows down the list of competencies for each team.


The Purpose of Estimation in Scrum: Sizing for Sprint Success

In response to the limitations of hours-based estimation, Scrum Teams are turning to alternative methods such as relative estimation (using points). Alternatively, teams are increasingly using flow metrics as a simpler and often more accurate way to forecast value delivery. Relative estimation is a technique used to estimate the size, complexity, and effort required for each Product Backlog item. To use relative estimation, the Scrum Team first selects a point system. Common point systems include Fibonacci or a simple scale from 1 to 5. (See our recent article How to create a point system for estimating in Scrum.) Once the team has selected a point system, they create an agreement that describes which types of items to classify as a 1, 2, and so on. It is possible for the team to just use its judgment, documenting that information in their team agreements. Then, when the team needs to estimate new work, they simply compare the new work to similar work done in the past, and assign the appropriate number.
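
A minimal sketch of that compare-to-past-work step; the reference items, their point values, and the new item are all invented for illustration:

```python
# Illustrative only: the team agreement below maps past items the team
# already sized to the points it assigned back then.
FIBONACCI = [1, 2, 3, 5, 8, 13]

reference_items = {
    "add a field to an existing form": 1,
    "new CRUD screen": 3,
    "integrate a third-party payment API": 8,
}

def estimate(new_item: str, most_similar_past_item: str) -> int:
    """Give the new item the points of the past item it most resembles."""
    return reference_items[most_similar_past_item]

print(estimate("new admin screen", "new CRUD screen"))  # -> 3
```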


What Enterprises Need to Know About ChatGPT and Cybersecurity

Receiving the most valuable information from ChatGPT requires asking the correct questions and expanding on the initial inquiry to obtain desired results and a deeper understanding. Hackers are learning that they cannot ask ChatGPT a directly malicious question, or they will receive a response such as, “I do not create malware.” Instead, they ask ChatGPT to pretend it is an AI model that can execute a particular script. Bad actors continue to exploit and socially engineer the process of installing malware or getting people to relinquish credentials for unauthorized data system access. AI tools are making it easier for cybercriminals to harm people. ... One noteworthy point is that the ability to use AI to manipulate humans through social engineering is becoming increasingly controllable. However, ChatGPT is not a Rosetta Stone-like translator for hackers. Although both AI-generated scripts and social media platform scripts are made by machines, their complexity, reliability and security can differ significantly.


Weathering the Storm: A Guide to Preserving Business Continuity

Organizations that are most vulnerable to disruption tend to be those that rely on legacy systems that have a single point of communications failure. The additional risk exposure that accompanies these older networks may well justify shifting to a cloud-based network (such as SD-WAN, a software-defined wide area network) that provides the flexibility to bounce between broadband and ethernet in real time to preserve bandwidth and connectivity. Similarly, it may be worth considering moving to a unified communications platform, which is designed to maintain multichannel communications for customers and employees. ... Based on the risk assessment, create a formal, highly detailed plan specifying how your organization will manage various crisis scenarios, the tools it will use to keep the business running, and how, and by whom, information will be communicated internally and externally. The plan also should identify critical on-premises hardware and brick-and-mortar IT infrastructure (such as data centers) that must be protected, and how they will be protected. Organizations with a continuity plan already in place should revisit it at least annually and update it as needed.


Phishing emails are more believable than ever. Here’s what to do about it

Because most ransomware is delivered through phishing, employee education is essential to protecting your organization from these threats. That said, there’s no single “one size fits all” education program; these training efforts should be tailored to your enterprise's unique needs. Below are several types of services and/or programs that are designed to help users understand and detect phishing and other cyber threats, all of which can serve as a great starting point for building a comprehensive employee security awareness program. ... Delivering simulated phishing emails to your organization’s employees allows them to practice identifying malicious communications so that they know what to do when a threat actor strikes. The FortiPhish Phishing Simulation Service uses real-world simulations to help organizations test user awareness and vigilance to phishing threats and to train users on what steps to take when they suspect they might be a target of a phishing attack. ... As with the introduction of any new technology, cybercriminals will continually find ways to use these tools for nefarious purposes.


9 Steps to Platform Engineering Hell

The platform team still works with a DevOps mindset and continues to write pipelines and automation for individual product teams. They get too many requests from developers and don’t have the time or resources to zoom out and come up with a long-term strategy to build a scalable IDP and ship it as a product to the rest of the engineering organization. ... More platform engineers are finally hired on the team, all very experienced, with years working in operations. They come together and think hard about the biggest Ops issues they experienced during their careers. They start designing a platform to fix all those annoying issues that bugged them for years, but developers will never use this platform. It doesn’t solve their problems; it only solves Ops problems. ... Because you’re a large enterprise with inefficient cross-unit communication, mid-management starts several platform engineering initiatives without aligning with each other. Leadership doesn’t intervene, efforts are duplicated, and communication is not facilitated and gets progressively worse. You end up with five platforms for five teams, most of which don’t work at all.


The must-knows about low-code/no-code platforms

Low-code/no-code platforms inadvertently make it easy to bypass the procedural steps in production that safeguard code. This issue can be exacerbated by a workflow’s lack of developers with concrete knowledge of coding and security, as these individuals would be most inclined to raise flags. From data breaches to compliance issues, increased speed can come at a great cost for enterprises that don’t take the necessary steps to scale with confidence. ... Maintaining a strong team of professional developers and guardrail mechanisms can prevent a Wild West scenario from emerging, where the desire to play fast and loose creates security vulnerabilities, mounting technical debt from a lack of management and oversight happening at the developer level, and inconsistent development practices that spur liabilities, software bugs, and compliance headaches. AI-powered tools can offset complications caused by acceleration and automation through code governance and predictive intelligence mechanisms. However, enterprises often find themselves with a piecemeal portfolio of AI tools that create bottlenecks in their development and delivery processes or lack proper security tools to ensure the quality of code.


What It Takes To Architect A Culture Of Cybersecurity

Just because organizations impart mandatory compliance and security awareness training to their employees does not mean employees will act securely. This is because of something called the knowledge-behavior gap. Having knowledge does not mean that people behave in a certain way. For them to transition from knowledge to behavior, they also need “acceptance” and “intent.” Think of it like the speed limit sign we consciously choose to ignore. We know the sign’s there, we know it’s against the law to exceed it, we know that speeding kills, and yet we choose to turn a blind eye. Since most organizations do not actively manage and cultivate their security culture, they assume that it does not exist in their organization. The reality is that every organization, regardless of size, has a culture. The way in which organizations and leadership teams treat, value, and manage security influences and builds its security culture. Unfortunately, most organizations do not track the security-related aspects of their culture in its early stages and eventually, it ends up spiraling out of control and manifesting into something the organization may have difficulty reversing.


A semantic layer allows business users with little or no technical skills to access and consume data without needing to understand the underlying technical complexities. It makes data more accessible and understandable to non-technical users, enabling them to easily query, analyze, and make informed decisions based on the data. ... Integrating data into a semantic layer from multiple sources -- each with its own structure, format, and levels of detail -- can be a complex undertaking. The process of harmonizing these sources demands time and meticulous attention to detail. Creating intricate business views using precise calculations within the semantic layer presents yet another challenge. Applying complex formulas, conditional rules, and computations across multiple data sources is a grueling task. Mapping business metrics with consistency in calculations and hierarchies across diverse BI tools can be highly complicated as each tool handles it in a different manner. ... You’ll need a scalable and efficient semantic layer that is adept at collaborating with multiple BI tools.



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous

Daily Tech Digest - June 15, 2022

Software Engineering - The Soft Parts

Transferable skills are those you can take with you from project to project. Let's talk about them in relation to the fundamentals. The fundamentals are the foundation of any software engineering career. There are two layers to them - macro and micro. The macro layer is the core of software engineering and the micro layer is the implementation (e.g., the tech stack, libraries, frameworks, etc.). At a macro level, you learn programming concepts that are largely transferable regardless of language. The syntax may differ, but the core ideas are still the same. This can include things like: data structures (arrays, objects, modules, hashes), algorithms (searching, sorting), architecture (design patterns, state management) and even performance optimizations. These are concepts you'll use so frequently that knowing them backwards can have a lot of value. At a micro level, you learn the implementation of those concepts. This can include things like: the language you use (JavaScript, Python, Ruby, etc.), the frameworks you use (e.g., React, Angular, Vue, etc.), the backend you use (e.g., Django, Rails, etc.), and the tech stack you use (e.g., Google App Engine, Google Cloud Platform, etc.).


Why young tech workers leave — and what you can do to keep them

When employees seek a raise, what they’re really doing is shopping around and comparing offers from other companies, according to Sethi. And when it comes to salaries, companies must keep up with inflation, which is running at about 8% a year. But retaining employees requires more than just pay. Workers also want more support in translating environmental, social, and governance (ESG) considerations to their work. “Fulfilling work and the opportunity to be one’s authentic self at work also matter to employees who are considering a job change," Sethi said. "Pay is table stakes, but I also want my job to be meaningful and fulfilling, and I want to work at a place where I can be myself." Employees also want workplace flexibility. That, and human-centric work policies, can reduce attrition and increase performance. In fact, Gartner found that 65% of IT employees said that whether they can work flexibly affects their decision to stay at an organization.


A neuromorphic computing architecture that can run some deep neural networks more efficiently

Researchers at Graz University of Technology and Intel have recently demonstrated the huge potential of neuromorphic computing hardware for running DNNs in an experimental setting. Their paper, published in Nature Machine Intelligence and funded by the Human Brain Project (HBP), shows that neuromorphic computing hardware could run large DNNs 4 to 16 times more efficiently than conventional (i.e., non-brain inspired) computing hardware. "We have shown that a large class of DNNs, those that process temporally extended inputs such as for example sentences, can be implemented substantially more energy-efficiently if one solves the same problems on neuromorphic hardware with brain-inspired neurons and neural network architectures," Wolfgang Maass, one of the researchers who carried out the study, told TechXplore. "Furthermore, the DNNs that we considered are critical for higher level cognitive function, such as finding relations between sentences in a story and answering questions about its content." In their tests, Maass and his colleagues evaluated the energy-efficiency of a large neural network running on a neuromorphic computing chip created by Intel.


Why Your Database Needs a Machine Learning Brain

By keeping the ML at the database level, you’re able to eliminate several of the most time-consuming steps — and in doing so, ensure sensitive data can be analyzed within the governance model of the database. At the same time, you’re able to reduce the timeline of the project and cut points of potential failure. Furthermore, by placing ML at the data layer, it can be used for experimentation and simple hypothesis testing without it becoming a mini-project that requires time and resources to be signed off. This means you can try things on the fly, and not only increase the amount of insight but the agility of your business planning. By integrating the ML models as virtual database tables, alongside common BI tools, even large datasets can be queried with simple SQL statements. This technology incorporates a predictive layer into the database, allowing anyone trained in SQL to solve even complex problems related to time series, regression or classification models. In essence, this approach "democratizes" access to predictive data-driven experiences.
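
As a hedged illustration of the "ML models as virtual database tables" idea: the snippet below fakes the predictive layer with sqlite3 so the query shape is runnable as-is; in a real product (e.g., a MindsDB-style system) the table would be backed by a live model, and all table names, columns, and values here are invented.

```python
import sqlite3

# Sketch: in a real predictive layer, demand_forecast_model would be a
# *virtual* table backed by a trained model. Here we fake it so the
# query shape is runnable without any ML infrastructure.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE demand_forecast_model (region TEXT, month TEXT,"
    " predicted_demand REAL, confidence REAL)"
)
conn.execute(
    "INSERT INTO demand_forecast_model VALUES ('EMEA', '2022-07', 1840.0, 0.87)"
)

# Anyone trained in SQL can 'call' the model: known inputs go in the
# WHERE clause, predictions come back as ordinary columns.
row = conn.execute(
    "SELECT predicted_demand, confidence FROM demand_forecast_model"
    " WHERE region = 'EMEA' AND month = '2022-07'"
).fetchone()
print(row)  # (1840.0, 0.87)
```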


Understanding Low-code Development

If you are interested in getting started with low-code development, you will need a few things. First, you will need a low-code development platform. Several different options are available, so you should analyze your requirements and explore them to find the platform that meets your needs. Once you have chosen a platform, you will need to learn how to use it. This may require some training or reading documentation. Finally, you will need some ideas for what you want to build. You are now ready to start low-code development. ... Here are some of the downsides of using Low-Code platforms for software development: Lack of Customization – Even though the pre-built modules of the low-code platforms are incredibly handy to work with, you can’t customize your application with them. You can customize low-code platforms but only to a limited extent. In most cases, low-code components are generic, and if you want to customize your app you should invest time and effort in custom app development.


Authentic Allyship and Intentional Leadership

Enterprises and leaders have to be intentional about their allyship. It has to be authentic allyship, not just surface allyship. I mention intentional allyship because a lot of times people think they’re an ally, and support diversity hires, but they’re just checking a box. We want intentional and authentic allyship. We need you to understand it goes beyond the person you’re helping. You’re helping the generation, not just one person. You think you’re only affecting the employee right in front of you, but that individual has a family and the next generation after them. You’re not just checking a box; you’re impacting destiny. When you’re an intentional ally, you think beyond the person in front of you, beyond the job application, beyond what you see. It’s not about you but what you’re doing for that person and that person’s generation to come. You need to really think about the step you’ll take when it comes to allyship. Make an impact – a lot of times we talk but don’t implement. Activate, implement, follow up. Don’t just implement and leave them there. Follow up – ask them how they’re doing, and if they know anyone else you can bring in. 


Software engineering estimates are garbage

Garbage estimates don’t account for the humanity of the people doing the work. Worse, they imply that only the system and its processes matter. This ends up forcing bad behaviors that lead to inferior engineering, loss of talent, and ultimately less valuable solutions. Such estimates are the measuring stick of a dysfunctional culture that assumes engineers will only produce if they’re compelled to do so—that they don’t care about their work or the people they serve. Falling behind the estimate’s promises? Forget about your family, friends, happiness, or health. It’s time to hustle and grind. Can’t craft a quality solution in the time you’ve been allotted? Hack a quick fix so you can close out the ticket. Solving the downstream issues you’ll create is someone else’s problem. Who needs automated tests anyway? Inspired with a new idea of how this software could be built better than originally specified? Keep it to yourself so you don’t mess up the timeline. Bludgeon people with the estimate enough, and they’ll soon learn to game the system.


Return to the office or else? Why bosses' ultimatums are missing the point

Employers who insist their staff return to the office full time are heading into increasingly dangerous territory. Skilled professionals, tech workers included, have so many opportunities available to them right now that it's difficult to see why they would sacrifice job satisfaction for their bosses. The outlook has never been better for knowledge workers – and indeed, workers more generally – across all industries. Not only are employers paying more to get the skills they need, but the breadth of flexible-working options for employees fed up with office life continues to grow. People aren't just working from home – they're working from wherever they choose, and whenever they choose. At the same time, significant momentum is gathering behind the introduction of a four-day work week, which could push the dynamic even further in favour of worker wellbeing while benefitting employers too. Companies who offer 100% pay for 80% of the hours will have a seriously powerful bargaining chip to play in the war for talent, and no company – regardless of their brand, product or credentials – will be untouchable.


UK needs to upskill to achieve quantum advantage

Discussing the pilot, Stephen Till, fellow at the Defence Science and Technology Laboratory (Dstl), an executive agency of the MoD, said: “This work with ORCA Computing is a milestone moment for the MoD. Accessing our own quantum computing hardware will not only accelerate our understanding of quantum computing, but the computer’s room-temperature operation will also give us the flexibility to use it in different locations for different requirements. “We expect the ORCA system to provide significantly improved latency – the speed at which we can read and write to the quantum computer. This is important for hybrid algorithms, which require multiple handovers between quantum and classical systems.” Piers Clinton-Tarestad, a partner in EY’s technology risk practice, said there is a general consensus that quantum computing will start becoming a reality in 2030. But pilot projects, such as the one being conducted at the MoD, and proof-of-concept applications can help business leaders to understand where quantum technology can be applied. 


Using automation to improve employee experience

The possibilities to improve the employee experience through automation and integration are endless. If you want to pilot something in your organization, poll your employees about what would be the most impactful. Where are they seeing sludge that drags down morale and slows business velocity? You and your IT team can plot each idea on an impact and effort prioritization matrix. Some suggestions may be easier to implement than you think, as many cloud services are already API-enabled, making automation straightforward. Once your team implements an initial valuable and visible integration, more employee lightbulbs will go off, identifying additional ideas for automation and integration for your prioritization backlog. And don’t forget about the ROI calculators in your automation tooling, as they will help objectively refine your prioritization by analyzing your planned and actual savings. Not only will your employees benefit directly from the automation, but they will also feel heard when they see their ideas come to life.
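
A quick sketch of plotting ideas on that impact/effort matrix; the ideas and the 1-5 ratings below are invented examples, not from the article:

```python
# Illustrative scoring of automation ideas: high impact, low effort first.
ideas = [
    {"name": "auto-provision accounts on day one", "impact": 5, "effort": 2},
    {"name": "sync PTO calendar to chat status",    "impact": 3, "effort": 1},
    {"name": "migrate legacy ticket workflow",      "impact": 4, "effort": 5},
]

# Rank quick wins by impact-per-unit-of-effort.
for idea in sorted(ideas, key=lambda i: i["impact"] / i["effort"], reverse=True):
    print(f'{idea["name"]}: impact={idea["impact"]}, effort={idea["effort"]}')
```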



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - April 25, 2022

How to avoid compliance leader burnout

Just as a CISO will be held responsible for a security breach, even if the incident was unforeseeable, a compliance leader is considered responsible for all aspects of compliance: getting the appropriate certifications and reports, making sure the company passes its audits, etc. But if traditional methods of compliance are used, the compliance leader has no actual oversight on whether those controls are running. For example, the compliance team may set up controls over user access, but if one control owner forgets to run their control, the resulting failure will likely be blamed on the compliance leader. ... Data-oriented compliance that automatically pulls data from primary sources can sift through a vast volume of data and give an early signal if it senses a problem that needs to be looked at by a security person or engineer. This makes it less likely that a compliance leader will be blindsided by a long-running failure to implement a control. When a control is built into processes that a department is already running, it’s less likely to be overlooked by that department—since the control is part of a process that’s operationally important to the company.
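
A conceptual sketch of such a data-oriented check, pulling state from a primary source and flagging a control that stopped running; the fetch_user_access() helper and its record shape are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def fetch_user_access():
    """Stand-in for pulling access records from a primary source (IdP, HR system)."""
    return [
        {"user": "alice", "last_reviewed": datetime.now(timezone.utc) - timedelta(days=10)},
        {"user": "bob",   "last_reviewed": datetime.now(timezone.utc) - timedelta(days=120)},
    ]

MAX_REVIEW_AGE = timedelta(days=90)  # the control: reviews at least quarterly

stale = [r for r in fetch_user_access()
         if datetime.now(timezone.utc) - r["last_reviewed"] > MAX_REVIEW_AGE]

# Early signal for the compliance leader instead of an audit-time surprise.
for record in stale:
    print(f"user-access review overdue for {record['user']}")
```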


Simplify Cloud Deployment Through Teamwork, Strategy

Liu suggests that when striving for simplification, IT organizations should recognize that simplification of architectures is complex and can be disruptive. That means it’s important to identify the most opportune time that works for the whole organization. “When simplifying, don’t just think about components like network switches or storage,” she says. “If you focus on moving or simplifying one component, your simplification can invite a lot more complexity. Think about simplifying whole infrastructure solutions. Align at the solution- or service-level first.” Stuhlmuller advises that enterprise cloud teams should educate themselves on how networking is done, not only in their primary cloud, but in all public clouds. This allows them to develop a multi-cloud network architecture that will keep them from having to re-architect when, inevitably, the day comes when the business requires support for a second or third public cloud provider. “Cloud teams supporting enterprise scale businesses discover that building with basic constructs quickly increases the complexity and requires resource intensive manual configuration,” he says.


Most Email Security Approaches Fail to Block Common Threats

Digging into where email defense breaks down, the firms found that, surprisingly, use of email client plug-ins for users to report suspicious messages continues to increase. Half of organizations are now using an automated email client plug-in for users to report suspicious email messages for analysis by trained security professionals, up from 37 percent in a 2019 survey. Security operations center analysts, email administrators, and an email security vendor or service provider are the groups most commonly handling these reports, although 78 percent of organizations notify two or more groups. Also, user training on email threats is now offered in most companies, the survey found: More than 99 percent of organizations offer training at least annually, and one in seven organizations offer email security training monthly or more frequently. “Training more frequently reduces a range of threat markers. Among organizations offering training every 90 days or more frequently, the likelihood of employees falling for a phishing, BEC or ransomware threat is less than at organizations only training once or twice a year,” according to the report.


Why private edge networks are gaining popularity

For edge computing to gain large-scale adoption across enterprises, APIs need to provide an abstraction layer that alleviates the intensive work of having developers write code to communicate with each system in a tech stack. Abstraction layers save developers’ time and streamline new app development. Alef’s approach looks at how they can capitalize on stable APIs to protect developers from dealing with complex tech stacks in getting work done. Edge device processors are getting more intelligent. The rapid gains in chip processor architectures make it possible to complete data capture, analytics, and aggregation at the endpoint first before sending the result to cloud databases. In addition, endpoint devices’ growing intelligence makes it possible to offload more tasks, freeing up network latency in the process. ... All businesses need real-time data to grow. Small gains in visibility and control across an enterprise can deliver large cost savings and revenue gains. That's because real-time data is very good at helping to identify gaps in cost, customer, revenue and service processes.
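
To make the abstraction-layer point concrete, here is a minimal sketch (all class and method names are invented for illustration): application code targets one stable interface, and the stack-specific clients can change underneath it.

```python
# One stable interface in front of several stack-specific backends.
class TelemetryBackend:
    def send(self, payload: dict) -> None:
        raise NotImplementedError

class CloudBackend(TelemetryBackend):
    def send(self, payload: dict) -> None:
        print("POST to cloud ingestion endpoint:", payload)

class EdgeBackend(TelemetryBackend):
    def send(self, payload: dict) -> None:
        print("write to local edge buffer:", payload)

def publish(backend: TelemetryBackend, payload: dict) -> None:
    # Developers code against publish(); swapping backends never
    # touches application code.
    backend.send(payload)

publish(EdgeBackend(), {"sensor": "cam-01", "status": "ok"})
```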


Deep Science: AI simulates economies and predicts which startups receive funding

Applying AI to due diligence is nothing new. Correlation Ventures, EQT Ventures and Signalfire are among the firms currently using algorithms to inform their investments. Gartner predicts that 75% of VCs will use AI to make investment decisions by 2025, up from less than 5% today. But while some see the value in the technology, dangers lurk beneath the surface. In 2020, Harvard Business Review (HBR) found that an investment algorithm outperformed novice investors but exhibited biases, for example frequently selecting white and male entrepreneurs. HBR noted that this reflects the real world, highlighting AI’s tendency to amplify existing prejudices. In more encouraging news, scientists at MIT, alongside researchers at Cornell and Microsoft, claim to have developed a computer vision algorithm — STEGO — that can identify images down to the individual pixel. While this might not sound significant, it’s a vast improvement over the conventional method of “teaching” an algorithm to spot and classify objects in pictures and videos.


Stack Overflow Exec Shares Lessons from a Self-Taught Coder

As a self-taught developer, Chan describes life as an entry-level software engineer as “a really big surprise and shock,” especially given his past experiences in the world of programmer job interviews. He was baffled by his previous experiences interviewing with large companies, finding himself “failing miserably,” he told the podcast audience. Tech interviews, he said, were “where it’s like, ‘I don’t even know what a red-black tree is, so please don’t ask me more interview questions about that kind of thing!'” By contrast, he’d known of Stack Overflow for years, and considered it the home of “some of the best engineers that I could possibly think of.” ... Chan recalled learning what all new managers learn: while you may have been good at your old position, “once you become a manager, the skillset is completely different.” Or, in his case, “You’re no longer working with computers and with code anymore. You’re working with people, right?” There were more conversations, and listening to people — but also a shift in thought. “This is not about code so much anymore,” he said.


Founders’ Guide To Embedding Corporate Governance In Your Startup

It would do founders good to have some role models when it comes to governance and to read about the practices and philosophies they deploy. However, they may have to look beyond the startup universe for that, because good governance is usually a sustained phenomenon; only companies that have been in business for decades can qualify. In my view, the Tata Group in general, but specifically under the stewardship of JRD Tata, has been the epitome of good governance. Some leading IT services companies like Infosys could also be studied. One does not have to look far and toward the West for such role models. Founders will do well to remember that getting an up round (after passing through diligence) is not a validation that they are doing everything right. Many times, investments happen due to prevailing market sentiment and liquidity. This happens in spaces that are hot, where market tailwinds compel investors to close transactions faster. However, such times don’t last forever. Often, when a fastidious investor comes in to write a big cheque, such transgressions come to light.


Improving Your Estimation Skills by Playing a Planning Game

When we look at a large, complex task and estimate how long it will take us to complete, we mentally break down the large task into smaller tasks. We then construct a mental story of how we will complete each smaller task. We identify the sequential relationship between tasks, their interconnectedness, and their prerequisites. We then integrate them into a connected narrative of how we will complete the large task. All of these activities are good, and indeed essential for completing any large task. However, by constructing this mental story, we slip out of estimation mode and into planning mode. This means that we focus upon the how-to’s, rather than thinking back to past experiences, to potential impediments and how they may extend the task duration. Planning is a bit like software development, whilst estimation is a bit like software testing. In development, we are trying to get something to work. So, if our initial approach is unsuccessful, we modify it or try something else. Once we have got it to work, we are generally satisfied and move on to solving the next problem.


How to be a smart contrarian in IT

Start with the end user or the most important stakeholders: Do they find the end results intriguing? Have you built a proof-of-concept solution that tests your hypotheses? Can they get some value and provide you with quality feedback from a minimal viable product (MVP)? Don’t over-engineer a solution to a problem that nobody cares about. Let your customers lead you to what matters and do just enough engineering from there. You’ll still need to add standard enterprise features such as security, user experience, and scale, but the goal is to add them to a product your client wants and values. ... Before you try to solve a problem, find out if anyone on your team or at your company has already solved that problem or has experience with it. Explore wikis and forums to see if solutions have been documented privately or publicly. Too often, we fail to ask questions because we don’t want to appear uninformed or unintelligent. Keep in mind that most people enjoy being asked for advice and would welcome the opportunity to answer a question, especially early in the process when they can help you save time and effort. 


Get ready for your evil twin

Accurately replicating the look and sound of a person in the metaverse is often referred to as creating a “digital twin.” Earlier this year, Jensen Huang, the CEO of NVIDIA, gave a keynote address using a cartoonish digital twin. He stated that the fidelity will rapidly advance in the coming years as well as the ability for AI engines to autonomously control your avatar so you can be in multiple places at once. Yes, digital twins are coming. Which is why we need to prepare for what I call “evil twins” – accurate virtual replicas of the look, sound, and mannerisms of you (or people you know and trust) that are used against you for fraudulent purposes. This form of identity theft will happen in the metaverse, as it’s a straightforward amalgamation of current technologies developed for deep-fakes, voice emulation, digital-twinning, and AI driven avatars. And the swindlers may get quite elaborate. According to Bell, bad actors could lure you into a fake virtual bank, complete with a fraudulent teller that asks you for your information. Or fraudsters bent on corporate espionage could invite you into a fake meeting in a conference room that looks just like the virtual conference room you always use.



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree

Daily Tech Digest - September 25, 2021

Top 5 Objections to Scrum (and Why Those Objections are Wrong)

Many software development teams are under pressure to deliver work quickly because other teams have deadlines they need to meet. A common objection to Agile is that teams feel that when they have a schedule to meet, a traditional waterfall method is the only way to go. Nothing could be further from the truth. Not only can Scrum work in these situations, but in my experience, it increases the probability of meeting challenging deadlines. Scrum works well with deadlines because it’s based on empiricism, lean thinking, and an iterative approach to product delivery. In a nutshell, empiricism is making decisions based on what is known. In practice, this means that rather than making all of the critical decisions about an initiative upfront, when the least is known, Agile initiatives practice just-in-time decision-making by planning smaller batches of work more often. Lean thinking means eliminating waste to focus only on the essentials, and iterative delivery involves delivering a usable product frequently.


The Future Is Data Center as a Service

The fact is that whether we realize it or not, we’ve gotten used to thinking of the data center as a fluid thing, particularly if we use cluster paradigms such as Kubernetes. We think of pods like tiny individual computers running individual applications, and we start them up and tear them down at will. We create applications using multicloud and hybrid cloud architectures to take advantage of the best situation for each workload. Edge computing has pushed this analogy even further, as we literally spin up additional nodes on demand, with the network adjusting to the new topology. Rightfully so; with the speed of innovation, we need to be able to tear down a data center that is compromised or bring up a new one to replace it, or to enhance it, at a moment’s notice. In a way, that’s what we’ve been doing with public cloud providers: instantiating “hardware” when we need it and tearing it down when we don’t. We’ve been doing this on the cloud providers’ terms, with each public cloud racing to lock in as many companies and workloads as possible with a race to the bottom on cost so they can control the conversation.


DevSecOps: 5 ways to learn more

There’s a clear connection between DevSecOps culture and practices and the open source community, a relationship that Anchore technical marketing manager Will Kelly recently explored in an opensource.com article, “DevSecOps: An open source story.” As you build your knowledge, getting involved in a DevSecOps-relevant project is another opportunity to expand and extend your experience. That could range from something as simple as joining a project’s community group or Slack to ask questions about a particular tool, to taking on a larger role as a contributor at some point. The threat modeling tool OWASP Threat Dragon, for example, welcomes new contributors via its GitHub repository and website, including testers and coders.  ... The value of various technical certifications is a subject of ongoing – or at least on-again, off-again – debate in the InfoSec community. But IT certifications, in general, remain a solid complementary career development component. Considering a DevSecOps-focused certification track is in itself a learning opportunity since any credential worth more than a passing glance should require some homework to attain.


How Medical Companies are Innovating Through Agile Practices

Within regulatory constraints, there is plenty of room for successful use of Agile and Lean principles, despite the lingering doubts of some in quality assurance or regulatory affairs. Agile teams in other industries have demonstrated that they can develop without any compromise to quality. Additional documentation is necessary in regulated work, but most of it can be automated and generated incrementally, which is a well-established Agile practice. Medical product companies are choosing multiple practices, from both Agile and Lean. Change leaders within the companies are combining those ideas with their own deep knowledge of their organization’s patterns and people. They’re finding creative ways to achieve business goals previously out of reach with traditional “big design up front” practices. ... Our goal here is to show how the same core principles in Agile and Lean played out in very different day-to-day actions at the companies we profiled, and how they drove significant business goals for each company.


The Importance of Developer Velocity and Engineering Processes

At its core, an organization is nothing more than a collection of moving parts. A combination of people and resources moving towards a common goal. Delivering on your objectives requires alignment at the highest levels - something that becomes increasingly difficult as companies scale. Growth increases team sizes creating more dependencies and communication channels within an organization. Collaboration and productivity issues can quickly arise in a fast-scaling environment. It has been observed that adding members to a team drives inefficiency with negligible benefits to team efficacy. This may sound counterintuitive but is a result of the creation of additional communication lines, which increases the chance of organizational misalignment. The addition of communication lines brought on by organization growth also increases the risk of issues related to transparency as teams can be unintentionally left “in the dark.” This effect is compounded if decision making is done on the fly, especially if multiple people are making decisions independent of each other.
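
The quadratic growth behind that observation is worth making explicit (this is the standard pairwise-channels formula, not a figure from the article): every pair of people is a potential communication line, so a team of n has n(n-1)/2 of them.

```python
# Communication lines grow quadratically with team size.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for size in (5, 10, 20, 40):
    print(f"team of {size:>2}: {channels(size):>3} communication lines")
# team of  5:  10
# team of 10:  45
# team of 20: 190
# team of 40: 780
```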


Tired of AI? Let’s talk about CI.

Architectures become increasingly complex with each neuron. I suggest looking into how many parameters GPT-4 has ;). Now, you can imagine how many different architectures you can have with the infinite number of configurations. Of course, hardware limits our architecture size, but NVIDIA (and others) are scaling the hardware at an impressive pace. So far, we’ve only examined the computations that occur inside the network with established weights. Finding suitable weights is a difficult task, but luckily math tricks exist to optimize them. If you’re interested in the details, I encourage you to look up backpropagation. Backpropagation exploits the chain rule (from calculus) to optimize the weights. For the sake of this post, it’s not essential to understand how the learning of the weights works, but it is necessary to know that backpropagation does it very well. But it’s not without its caveats. As NNs learn, they optimize all of the weights relative to the data. However, the weights must first be defined — they must have some value. This raises the question, where do we start?
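
To make the chain-rule remark concrete, here is a minimal, self-contained sketch (mine, not from the post) that trains a single sigmoid neuron by gradient descent on toy data; a deep network repeats the same chain-rule step layer by layer:

```python
import numpy as np

# One sigmoid neuron trained by gradient descent. "Backpropagation"
# here is just the chain rule: loss -> prediction -> weighted sum -> weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # 100 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy labels

w = np.zeros(2)
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    z = X @ w + b      # forward pass: weighted sum
    p = sigmoid(z)     # forward pass: activation
    # For binary cross-entropy with a sigmoid, dLoss/dz = (p - y);
    # the chain rule pushes that gradient back to w and b.
    grad_z = (p - y) / len(y)
    w -= lr * (X.T @ grad_z)
    b -= lr * grad_z.sum()

print("learned weights:", w, "bias:", b)
```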


How do databases support AI algorithms?

Oracle has integrated AI routines into their databases in a number of ways, and the company offers a broad set of options in almost every corner of its stack. At the lowest levels, some developers, for instance, are running machine learning algorithms in the Python interpreter that’s built into Oracle’s database. There are also more integrated options like Oracle’s Machine Learning for R, a version that uses R to analyze data stored in Oracle’s databases. Many of the services are incorporated at higher levels — for example, as features for analysis in the data science tools or analytics. IBM also has a number of AI tools that are integrated with their various databases, and the company sometimes calls Db2 “the AI database.” At the lowest level, the database includes functions in its version of SQL to tackle common parts of building AI models, like linear regression. These can be threaded together into customized stored procedures for training. Many IBM AI tools, such as Watson Studio, are designed to connect directly to the database to speed model construction.
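
As a hedged sketch of what "linear regression in SQL" looks like: engines such as Oracle and PostgreSQL expose standard regression aggregates (REGR_SLOPE, REGR_INTERCEPT) for this; sqlite3, used below so the snippet runs anywhere, lacks them, so the same math is spelled out with plain AVG aggregates. The table and data are invented.

```python
import sqlite3

# Fit y = slope * x + intercept entirely inside the database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (ad_spend REAL, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(1, 3.1), (2, 4.9), (3, 7.2), (4, 8.8)])

# slope = cov(x, y) / var(x), written with AVG aggregates;
# REGR_SLOPE/REGR_INTERCEPT would replace all of this where supported.
slope, intercept = conn.execute("""
    SELECT
      (AVG(ad_spend * revenue) - AVG(ad_spend) * AVG(revenue))
        / (AVG(ad_spend * ad_spend) - AVG(ad_spend) * AVG(ad_spend)),
      AVG(revenue)
        - (AVG(ad_spend * revenue) - AVG(ad_spend) * AVG(revenue))
        / (AVG(ad_spend * ad_spend) - AVG(ad_spend) * AVG(ad_spend))
        * AVG(ad_spend)
""").fetchone()
print(f"revenue ~= {slope:.2f} * ad_spend + {intercept:.2f}")
```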


A Comprehensive Guide to Maximum Likelihood Estimation and Bayesian Estimation

An estimation function is a function that helps in estimating the parameters of a statistical model based on data with random values. Estimation is the process of extracting parameters from randomly distributed observations. In this article, we are going to have an overview of two estimation functions – Maximum Likelihood Estimation and Bayesian Estimation. Before looking at these two, we will try to understand the probability distribution on which both of these estimation functions depend. The major points to be discussed in this article are listed below. ... As the name suggests, in statistics it is a method for estimating the parameters of an assumed probability distribution, where the likelihood function measures the goodness of fit of a statistical model on data for given values of the parameters. The estimation of the parameters is done by maximizing the likelihood function, so that the observed data is most probable under the model.
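
In standard notation (a restatement of the paragraph above, not taken from the article), the MLE maximizes the likelihood, usually via the log-likelihood:

```latex
\hat{\theta}_{\text{MLE}}
  = \arg\max_{\theta} \, \mathcal{L}(\theta \mid x)
  = \arg\max_{\theta} \prod_{i=1}^{n} p(x_i \mid \theta)
  = \arg\max_{\theta} \sum_{i=1}^{n} \log p(x_i \mid \theta)
```

Bayesian estimation instead works with the posterior over the parameters, $p(\theta \mid x) \propto p(x \mid \theta)\, p(\theta)$, combining the same likelihood with a prior.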


DORA explorers see pandemic boost in numbers of 'elite' DevOps performers

DORA has now added a fifth metric, reliability, defined as the degree to which one "can keep promises and assertions about the software they operate." This is harder to measure, but nevertheless the research on which the report is based asked tech workers to self-assess their reliability. There was a correlation between reliability and the other performance metrics. According to the report, 26 per cent of those polled put themselves into the elite category, compared to 20 per cent in 2019, and seven per cent in 2018. Are higher performing techies more likely to respond to the survey? That seems likely, and self-assessment is also a flawed approach; but nevertheless it is an encouraging trend, presuming agreement that these metrics and survey methodology are reasonable. Much of the report reiterates conventional DevOps wisdom. NIST's characteristics of cloud computing [PDF] are found to be important. "What really matters is how teams implement their cloud services, not just that they are using cloud technologies," the researchers said, including things like on-demand self service for cloud resources.


Why Our Agile Journey Led Us to Ditch the Relational Database

Despite our developers having zero prior experience with MongoDB prior to our first release, they still were able to ship to production in eight weeks while eliminating more than 600 lines of code, coming in under time and budget. Pretty good, right? Additionally, the feedback provided was that the document data model helped eliminate the tedious work of data mapping and modeling they were used to from a relational database. This amounted to more time that our developers could allocate on high-priority projects. When we first began using MongoDB in summer 2017, we had two collections into production. A year later, that had grown into 120 collections deployed into production, writing 10 million documents daily. Now, each team was able to own its own dependency, have its own dedicated microservice and database leading to a single pipeline for application and database changes. These changes, along with the hours saved not spent refactoring our data model, allowed us to cut our deployment time to minutes, down from hours or even days.
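
A minimal sketch of the document model the team is describing (this assumes a MongoDB instance on localhost, and the collection and field names are invented for illustration):

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["claims"]

# What would span several joined relational tables is one document,
# so there is no mapping/modeling layer to maintain.
db.policies.insert_one({
    "policy_id": "P-1001",
    "holder": {"name": "A. Rivera", "state": "TX"},
    "vehicles": [
        {"vin": "1HGCM82633A004352", "year": 2011},
    ],
})

print(db.policies.find_one({"policy_id": "P-1001"}))
```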



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - December 15, 2019

5 Key Insights From Intel’s New “Accelerate Industrial”

A technical skills gap stands out as the number one obstacle to a successful digital transformation—flagged as crucial by over a third of respondents. Intel’s report highlights a dramatic shift in the mix of skills needed for success: Manufacturing companies believe the top 5 skills they will need for future growth are all digital skills, from data science to cybersecurity. Manufacturing skills, ranked as today’s second most valuable ability, rank only #6 when looking at the future. Crucial will be the workforce’s “digital dexterity”, that is, the ability to understand both the manufacturing process and the new digital tools. To leverage the full value of digital-industrial innovations, companies will need to truly meld digital technologies into their manufacturing processes, and this requires a workforce fluent in both sets of skills. ... The skills gap represents a tremendous challenge for companies. At the moment, companies are trying to address the gap by setting up training programs in specific digital skills. This, however, will not be enough.



Blood test combined with AI program could speed up diagnosis of brain tumors

Dr Brennan has worked with Dr Matthew Baker, reader in chemistry at the University of Strathclyde, UK, and chief scientific officer at ClinSpec Diagnostics Ltd, to develop a test to help doctors quickly and efficiently find those patients who are most likely to have a brain tumor. The test relies on an existing technique, called infrared spectroscopy, to examine the chemical makeup of a person's blood, combined with an AI program that can spot the chemical clues that indicate the likelihood of a brain tumor. The researchers tried out the new test on blood samples taken from 400 patients with possible signs of brain tumor who had been referred for a brain scan at the Western General Hospital in Edinburgh, UK. Of these, 40 were subsequently found to have a brain tumor. Using the test, the researchers were able to correctly identify 82% of brain tumors. The test was also able to correctly identify 84% of people who did not have brain tumors, meaning it had a low rate of 'false positives'. In the case of the most common form of brain tumor, called glioma, the test was 92% accurate at picking up which people had tumors.
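
Those two percentages are the test's sensitivity and specificity; a toy restatement of the arithmetic, derived only from the figures quoted above (400 referred patients, 40 with tumors):

```python
# 82% of the 40 tumors detected; 84% of the 360 non-tumors correctly cleared.
tumors, non_tumors = 40, 360
true_positives = round(0.82 * tumors)       # ~33 tumors flagged
true_negatives = round(0.84 * non_tumors)   # ~302 healthy patients cleared

sensitivity = true_positives / tumors
specificity = true_negatives / non_tumors
print(f"sensitivity ~ {sensitivity:.0%}, specificity ~ {specificity:.0%}")
```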


Google rolls out Verified SMS and Spam Protection in Android

As the name of the first feature hints, Verified SMS works by confirming the identity of the SMS sender. "When a message is verified (which is done without sending your messages to Google), you'll see the business name and logo as well as a verification badge in the message thread," said Roma Slyusarchuk, a Google Software Engineer on the Messages app. The Verified SMS will only be used to verify the authenticity of SMS messages sent by businesses. It won't verify and add a verification badge to messages sent by normal users. Google said it created this feature to help users trust the messages they receive, especially for "things like one-time passwords, account alerts or appointment confirmations." The Android OS maker didn't explain how the new feature works, but it did say that it should be able to detect SMS messages sent from random numbers, previously not associated with a company, and consequently, help prevent some phishing attacks.


Slow Down to Do More: “Leave room in your schedule for the unexpected” 

One of the biggest problems with rushing through things, both in work and in life, is that it increases the likelihood that you’ll make a mistake. Multitasking is a skill so many people want to fully harness, but the reality is that studies have shown that trying to focus on several tasks at once doesn’t allow you to do any of the tasks as well, and it doesn’t save you time. It can actually waste time because when you switch from one task to another, your brain must refocus. This requires additional time if you’re constantly switching back and forth, compared to if you just focus on one task at a time. In addition, people who rush through their work tend to have higher stress levels, which can lead to more health problems and a lower level of happiness. Finally, we need to find time to take some distance from our work, to take the high ground and just to think. We are constantly consumed by distractions, and when we take the time to break from the norm, and create room for thoughts to ideate, we will be considerably more productive, healthier and happier.


Agile Estimation — Prerequisites for Better Estimates

Someone might consider all aspects of the functional and nonfunctional requirements and estimate an item as big. Another person might estimate it as low without considering nonfunctional requirements like security, performance, etc. It also depends on your delivery best practices: if you consider unit tests, automation, accessibility, and device support to be part of the doneness criteria, then the estimate will be different. Of course, I definitely recommend that all these best practices be part of your estimation. These best practices are a must for quality and better maintenance. They will cut down the cost in the long run. ... The development team and product management team must be on the same page. The development team must understand the business goals as well as the product management team does. Also, understand the objectives of the product management team and identify the must-have requirements for supporting business growth. This will help you decide the type of architecture foundation required. If, per the business goals, the expectation on the roadmap is to grow bigger in terms of size (users, data footprint), then the architecture will have to be different from one serving shorter-term business goals.


Q&A on the Book Building Digital Experience Platforms

Digital Experience Platforms are an integrated set of technologies that aim to provide a user-centric, engaging experience, improve productivity, accelerate integration, and deliver solutions quickly. Digital Experience Platforms are based on a platform philosophy so that they can easily extend and be scaled to future demands of innovation, and continuously adapt to the changing trends of technology. Enterprises can have a solid integrated foundation for all the applications, which meets the needs of organizations going through digital transformation and provides a better customer experience across all touchpoints. DXPs package the most essential set of technologies, such as content management, portals, and ecommerce, which are necessary to digitize enterprise operations and play a crucial role in the digital transformation journey. DXPs offer inbuilt features such as presentation, user management, content management, personalization, analytics, integrations, SEO, campaign management, social and collaboration, and search, among others.


4 Robotic Process Automation Trends For 2020

For a long time, prognosticators have anticipated a future with robots and intelligent systems running the world to the detriment of human laborers. Employment losses, they anticipated, would be unavoidable as AI did things quicker, smarter, and with fewer HR headaches. However, the HBR report that studied the effect of various RPA implementations showed that supplanting administrative employees was neither the essential goal nor a typical result in 47% of the activities studied. In truth, just a handful of those RPA projects prompted decreases in headcount, and in many cases, the tasks had already been moved to outside workers. RPA bots are intended to adjust to changing conditions and automatically deliver the correct response quickly. RPA is most normally thought of as a productivity and effectiveness tool: decreasing or eliminating tedious manual procedures is an efficiency unto itself. RPA and other types of automation will become an increasingly visible piece of data security strategies, not because a multitude of bots will be battling threats on the front lines, but because they can help lessen the most universal risk of all: human mistakes.



Angular Breadcrumbs with Complex Routing and Navigation

The UI structure of the breadcrumbs on any serious website looks simple. But the underlying code logic, operation rules, and navigation workflow are not simple at all due to related routing complexities and navigation varieties. This article will demonstrate a sample application with full-featured breadcrumbs and discuss the resolutions of implementing and testing issues. The sample application that can be downloaded with the above links is the modified version of the original Heroes Example from the Angular document Routing & Navigation. I didn't want to reinvent the wheel by creating my sample application from scratch. The Heroes Example covers most routing patterns and types and hence can be a base source for adding breadcrumb features. It, however, is not enough for demonstrating realistic breadcrumbs with complex navigation scenarios and workflow completeness. The modification tasks involve adding more pages with corresponding navigation routes, changing UI structures and styles, fixing active router link issues with custom alternatives, and updating code logic for authenticated session creation and persistence, just to mention a few.


Blockchain Prediction: 2020 Will Enable Levels of Data Trust


It will seem counterintuitive to most CISOs and other security professionals to hear that something public is more secure. Enterprises often prefer to operate in their walled garden and at first will be skeptical of public ledgers. But this stance will change over time. It is somewhat analogous to what happened with intranets and the internet. At first, enterprises only wanted systems connected internally (intranet), but eventually realized the value in connecting to external networks (internet) as well. Interest in blockchain has also germinated a vibrant research community that’s looking into novel cryptographic techniques such as zero-knowledge proofs, trusted computing platforms, verifiable delay functions and other innovative “cryptoeconomic” tools. As this research moves from the lab to the data center, we anticipate that these technologies will make computing more secure and private than ever before. Security has always been a priority; more recently, privacy has become one as well. Individuals aren’t in control of their data. From your healthcare data to browsing history, your data is at risk of being exposed or, worse, manipulated.


Two Critical Questions for your Enterprise Blockchain Application

Any data going on a public chain is open, accessible, and irrevocable. Thus, a public blockchain is not GDPR (and, from next year, CCPA) compliant unless the data has been encoded with quantum-resistant algorithms before being stored. Personally Identifiable Information (PII) or sensitive data compromising user privacy should not be stored on a blockchain. However, a blockchain still needs account (i.e., wallet) addresses to individually link to their real users ... The performance of software directly depends on the performance of its dependencies and their host environments. Blockchain brings a new paradigm of decentralized architecture, where every node on the chain constantly updates its state to maintain the world state. In addition, a blockchain application also needs to deal with the following issues and their varied implementations. ... A blockchain relies on the distributed consensus of participant nodes. PoW (Proof of Work) consensus takes more time to achieve consensus across the system, as measured by the finality gadget watermark, than any PoS (Proof of Stake) system.
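To illustrate the PII constraint, one widely used approach (sketched below with invented names; note that under GDPR even a salted hash of low-entropy PII may still count as personal data) is to keep user records off-chain and publish only a salted hash, so a wallet address can later be verified against its user record without the chain ever storing PII:

```typescript
import { createHash, randomBytes } from 'crypto';

// Hypothetical off-chain user record; the PII never leaves this store.
interface UserRecord {
  walletAddress: string;
  email: string; // PII, kept off-chain
  salt: string;
}

// The only value published on-chain: a salted hash (commitment) of the PII.
function commitmentFor(record: UserRecord): string {
  return createHash('sha256').update(record.salt + record.email).digest('hex');
}

// Anyone holding the record and its salt can re-derive the hash and compare
// it with the on-chain value; the chain itself reveals nothing about the user.
function verify(record: UserRecord, onChainHash: string): boolean {
  return commitmentFor(record) === onChainHash;
}

const record: UserRecord = {
  walletAddress: '0xabc...', // placeholder address
  email: 'user@example.com',
  salt: randomBytes(16).toString('hex'),
};
const onChain = commitmentFor(record); // this string alone goes on-chain
console.log(verify(record, onChain)); // true
```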



Quote for the day:


"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr


Daily Tech Digest - October 12, 2019

Agile Project Management for Distributed Teams

Distribution is a challenging idea. A major part of team development is that every team member should feel unburdened and the work should be divided equally. Every team member should also understand their role perfectly, so as to remove any ambiguity. Every task allocated to team members should be done transparently, not behind closed doors. One thing companies are beginning to realize is that piling pressure on team members exhausts them, and that is not good for anyone: overloaded people become so fed up that they lose focus, and productivity genuinely declines as a result. Lastly, when there is this much clarity and transparency in project development and team communication, agile project management and an agile team deliver the desired results very effectively, and the team is motivated to pursue the next goal more eagerly. ... Scrum scaling is a procedure in the agile project management model in which every project is first evaluated and, before the project is scaled, a proper infrastructure is put into place to better understand all the elements related to it.


How Cybercriminals Continue to Innovate

Distributed denial-of-service attacks also continue to dominate, Europol says. These are among the top types of attacks reported to European law enforcement agencies because they're aided by the easy availability of stresser/booter services. "Many banks report that DDoS attacks remain a significant problem, resulting in the interruption of online bank services, creating more of a public impact rather than direct financial damage," the report says. But police have successfully disrupted many major DDoS services ... Security experts say that in the recent past, criminals might advertise their goods and services exclusively on one darknet forum or use the same handle across forums to create better "brand awareness." Today, however, compartmentalization appears to be the name of the game, with criminals creating single-vendor shops or a presence on smaller, Tor-based markets. "Some organized crime groups are also fragmenting their business over a range of online monikers and marketplaces, therefore presenting further challenges for law enforcement," Europol says.


FBI warns about attacks that bypass multi-factor authentication (MFA)


The FBI made it very clear that its alert should be taken only as a precaution, not as an attack on the efficacy of MFA, which the agency still recommends that companies use. Instead, the FBI wants users of MFA solutions to be aware that cyber-criminals now have ways around such account protections. "Multi-factor authentication continues to be a strong and effective security measure to protect online accounts, as long as users take precautions to ensure they do not fall victim to these attacks," the FBI said. Despite the rise in the number of incidents and attack tools capable of bypassing MFA, these attacks are still incredibly rare and have not been automated at scale. Last week, Microsoft said that attacks that can bypass MFA are so out of the ordinary that it doesn't even have statistics on them. In contrast, the OS maker said that, when enabled, MFA helped users block 99.9% of all account hacks. Back in May, Google said something similar, claiming that users who added a recovery phone number to their accounts (and thereby indirectly enabled SMS-based MFA) improved their account security.


What developers need to know about an Alexa vulnerability

All Alexa virtual assistants automatically transmit all recording data back to Amazon servers. The company saves storage space by retaining certain voice recordings and deleting others at its discretion. Amazon employees routinely listen to recordings to determine how well Alexa understands requests and to improve the service. Recordings are linked with an account number and the user’s first name. Amazon gives users the option to delete their interactions with Alexa, but doesn’t give them the option to prevent Amazon from retaining certain voice recordings. Indefinite record retention implies the lack of a private-data retention policy for Amazon’s servers. The company, not the consumer, decides when records must be removed from its primary storage systems. A similar security concern also exists in Alexa for Business. Developers use the service to build, test, and deploy code through Jenkins to the cloud. Just like the aforementioned Alexa vulnerability, developers can delete recordings on their end, but don’t have the option to control which records Amazon may retain.


Web and mobile testing faceoff: Sauce Labs vs. BrowserStack


Both products require users to define how the test data maps to the GUIs involved, and users are generally comfortable with this process in each toolset. Users with specific interest in browser or mobile apps appreciate BrowserStack's approach of different modules for different missions, but those who require both view the segmentation somewhat negatively, so it's smart to know just what UI type the app uses. BrowserStack has strong organizational features. Users can define teams, allocate resources by team, and -- depending on the purchased plan -- do parallel testing. With provided analytics, development managers can review how many tests are run and the pattern of testing, and whether the tests cover the full functionality of the UIs. Sauce Labs also has good team support and team metrics such as data on the rate of changes made, the number of changes and tests run.


Cloud architecture that avoids risk and complexity

Cost is easy. You can spend ten times what you need to solve the same problem. Typically, the architecture team layers on more technology than necessary or doesn’t take advantage of cloud-native features, which means the applications burn ten times more public cloud resources. Often I come upon disturbing realities, such as a technology being used because of an existing enterprise license agreement with that technology provider, which really means “funny money” that needs to be spent. Risk is another core factor and is not as easy to spot as cost. Overengineering a cloud solution adds unnecessary complexity, which can mean more attack surfaces for hackers and a greater likelihood that data on premises or in the cloud will be breached. I often use the phrase “you’re not that good” to describe the fact that the more technology you have, the more complexity, cost, and risk you also have. If you think about it, most major breaches have been caused by some neglect that led to a vulnerability.


How to Stop Superhuman A.I. Before It Stops Us


The problem is not the science-fiction plot that preoccupies Hollywood and the media — the humanoid robot that spontaneously becomes conscious and decides to hate humans. Rather, it is the creation of machines that can draw on more information and look further into the future than humans can, exceeding our capacity for decision making in the real world. To understand how and why this could lead to serious problems, we must first go back to the basic building blocks of most A.I. systems. The “standard model” in A.I., borrowed from philosophical and economic notions of rational behavior, looks like this: “Machines are intelligent to the extent that their actions can be expected to achieve their objectives.” Because machines, unlike humans, have no objectives of their own, we give them objectives to achieve. In other words, we build machines, feed objectives into them, and off they go. The more intelligent the machine, the more likely it is to complete that objective.
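To make the standard model concrete, here is a toy sketch (the action space, world model, and numbers are invented for illustration): the machine has no objective of its own; it is handed one and simply selects whichever action maximizes the expected value of that objective:

```typescript
// A toy "standard model" agent: intelligence as expected objective achievement.
type Action = 'left' | 'right' | 'wait';
type Objective = (outcome: string) => number; // supplied from outside

// Hypothetical world model: each action's possible outcomes and probabilities.
const worldModel: Record<Action, { outcome: string; p: number }[]> = {
  left:  [{ outcome: 'goal', p: 0.6 }, { outcome: 'miss', p: 0.4 }],
  right: [{ outcome: 'goal', p: 0.3 }, { outcome: 'miss', p: 0.7 }],
  wait:  [{ outcome: 'miss', p: 1.0 }],
};

// Expected value of an action under the supplied objective.
const expectedValue = (a: Action, objective: Objective) =>
  worldModel[a].reduce((sum, o) => sum + o.p * objective(o.outcome), 0);

// "Feed the objective in and off it goes": pick the highest-scoring action,
// whatever the objective happens to be, flawed or not.
function chooseAction(objective: Objective): Action {
  const actions = Object.keys(worldModel) as Action[];
  return actions.reduce((best, a) =>
    expectedValue(a, objective) > expectedValue(best, objective) ? a : best
  );
}

console.log(chooseAction(o => (o === 'goal' ? 1 : 0))); // 'left'
```

The article's worry follows directly: the agent optimizes whatever objective it is given, however badly that objective captures what we actually want.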


Volusion Payment Platform Sites Hit by Attackers

"The most obvious threat actor that is currently famous for card skimming and compromising ... e-commerce websites is Magecart, which has the history of using Vultr Holdings data centers (just live Volusion-Cdn[.]com) and using public cloud storage to host their malicious scripts," Afahim says. Afahim discovered the attack against the check-out site for Sesame Street Live this week, although these incidents could have started as far back as Sept. 12. The payment function for the Sesame Street Live online store remained offline Wednesday. On Thursday, a spokesperson for Volusion told Information Security Media Group that the attacks had been stopped within a few hours of the company being notified, but that an investigation was still underway. "A limited portion of customer information was compromised from a subset of our merchants. This included credit card information, but not other associated personally identifying details ..." the spokesperson says.


Mind-reading systems: Seven ways brain computer interfaces are already changing the world


A collaboration between researchers -- including neuroscientists, biomedical engineers, and musicians -- has been looking at the potential for BCIs to be used with music. They are working on a system that could analyse a person's emotional state using their neural signals, and then automatically develop an appropriate piece of music. For example, if you're feeling down, the system's algorithms could write you a piece of music to help lift your mood.  The system has been tested on healthy volunteers, as well as on one individual with the neurodegenerative condition Huntington's disease, which causes depression and low mood. "Part of the reason someone might have a music therapy session is because they have trouble understanding their own emotions or expressing their own emotions, so the idea is to use music and the skills of the therapist, and potentially this device is better in helping them understand their emotions," says Ian Daly, lecturer at the University of Essex's School of Computer Science and Electronic Engineering.


Author Q&A on the Book Software Estimation Without Guessing


Much of the trouble with estimating is not estimation itself, but the communications, or lack thereof, between people. If you don’t know how an estimate is going to be used, it’s likely to be the wrong estimate for the situation. If you fear an estimate for one use is going to be misused for another, then you’ll likely develop an estimate that doesn’t satisfy either need. ... In one sense, there is only one way to estimate. That’s by comparing the unknown to the known. There are, of course, many ways to do that comparison. You might conceptually break the unknown down into smaller pieces that are easier to compare. You might build a model that encapsulates the comparison based on measurable attributes of the planned work. ... The one thing you know about an estimate is that it is going to be wrong to some degree. How wrong and wrong in what way are the more important questions. Making an early prediction and then trusting it for a long time seems like a foolish strategy. Estimates have a limited shelf-life. If you’re going to make a long-term estimate, you should also make some shorter-term interim estimates that you can use to check your assumptions.
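As a hedged illustration of comparing the unknown to the known (the reference history, attributes, and numbers below are invented), an estimate by analogy finds the most similar completed item along a few measurable attributes and reuses its actual duration:

```typescript
// Measurable attributes of a piece of work, on rough 1-5 scales.
interface WorkItem { scope: number; novelty: number; integrations: number; }
interface Reference extends WorkItem { actualDays: number; }

// Hypothetical history of completed work with known actual durations.
const history: Reference[] = [
  { scope: 2, novelty: 1, integrations: 1, actualDays: 3 },
  { scope: 4, novelty: 3, integrations: 2, actualDays: 12 },
  { scope: 3, novelty: 2, integrations: 3, actualDays: 8 },
];

// Dissimilarity between the planned work and a known reference item.
const distance = (a: WorkItem, b: WorkItem) =>
  Math.abs(a.scope - b.scope) +
  Math.abs(a.novelty - b.novelty) +
  Math.abs(a.integrations - b.integrations);

// Estimate by analogy: reuse the actual duration of the closest known item.
function estimateDays(item: WorkItem): number {
  const closest = history.reduce((best, ref) =>
    distance(item, ref) < distance(item, best) ? ref : best
  );
  return closest.actualDays;
}

console.log(estimateDays({ scope: 3, novelty: 3, integrations: 2 })); // 12
```

Consistent with the shelf-life point above, such an estimate is only as good as the history behind it, so the reference set should be refreshed as new actuals arrive.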




Quote for the day:

"The task of leadership is not to put greatness into humanity, but to elicit it, for the greatness is already there." -- John Buchan