windows vs linux

Discussion – Why are Linux servers more popular than Windows servers?

A server is a computer program or device that provides a service to another computer program and its user, also known as the client. Servers are often dedicated machines, carrying out few tasks apart from their server duties.

Servers fall into several categories, including file servers, database servers, print servers, and network servers.

Theoretically, whenever a computer shares resources with client machines, it is considered a server.

Speaking of servers, two main server platforms dominate the web-hosting market: Linux and Microsoft Windows.

Linux is open source, which is why it is cheaper and easier to adopt than a Windows server. Windows Server (a Microsoft product) is commercial: one must pay for the operating system and a periodic use license.

For many companies, though, Windows Server is worth the price, because it generally offers a wider range of features and more vendor support than Linux servers.

So how did we conclude that Linux is far more popular than Windows servers?

For evidence, let us look at the statistics.

For Web Servers

According to W3Cook, Linux powers the servers that run 96.5% of the top one million domains in the world (as ranked by Alexa).

W3Techs puts the figure lower, reporting that Linux powers around 70% of the top 10 million Alexa domains, with Windows accounting for the remaining 30%.

For Supercomputers

Linux utterly dominates the list of the top 500 most powerful supercomputers in the world.

TL;DR

Compared to Windows, Linux maintains a noticeable lead.

Linux isn’t just common on laptops, phones, and servers; governments around the world use it in their military operations and educational systems.

It is far more secure and is the OS used by all of the TOP500 supercomputers. Further, it is designed to handle demanding business requirements such as system administration, networking, and database management.

Linux: Fast with High Security

linux mascot

Source: Wccftech

Linux is a family of free, open source software operating systems built around the Linux kernel.

The kernel was created in 1991 by Linus Torvalds, then a student at the University of Helsinki in Finland, who began it as a personal project to build a new free operating system kernel.

According to studies by the Linux Foundation and SUSE, Linux is fast-becoming the operating system (OS) of choice for many organizations that operate servers that host cloud and big data applications.

Industries including finance, healthcare, the military, government, and the internet use it to manage operations. Startups and smaller businesses also often choose the open-source OS for their servers, because many distributions are freely available.

What makes Linux stand out

It is free and open source.

The first truly free Unix-like operating system, Linux is an entirely open source project. There is no commercial vendor trying to lock users into certain products or protocols, so businesses are free to mix and match whatever works best for their needs, and getting a genuine copy of a Linux distro (e.g. Ubuntu, Fedora) is absolutely free.

No wonder governments, organizations and major companies like Google, Amazon, and Netflix, are using the open source operating system in their own production systems.

It is more secure.

Linux-based operating systems are secure and suitable for servers. Linux implements a variety of security mechanisms to protect files and services from attack and abuse, and it strictly limits the influence of external sources (e.g. users, programs, or systems) that could destabilize a server.

Integrated control features include centralized identity management and Security-Enhanced Linux (SELinux), which provides mandatory access control (MAC), on a foundation that is Common Criteria- and FIPS 140-2-certified.
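
For instance, on an SELinux-enabled distribution you can check the enforcement mode and inspect the security contexts attached to files with the standard tools (a quick sketch; the path is illustrative):

    # Show the current SELinux mode: Enforcing, Permissive, or Disabled
    getenforce

    # List files together with their SELinux security contexts
    ls -Z /var/www

    # Temporarily switch to permissive mode (requires root)
    sudo setenforce 0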

It has better control.

The open source model allows a business to employ multiple vendors, effectively avoiding what is called vendor lock-in. Plus, system administrators have powerful tools at their disposal, such as systemctl for managing services.
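
A few common systemctl invocations, assuming a systemd-based distribution and a service named nginx (the service name is only an example):

    # Check whether a service is running
    systemctl status nginx

    # Start, stop, or restart a service
    sudo systemctl restart nginx

    # Make a service start automatically at boot
    sudo systemctl enable nginx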

It is more stable and reliable.

Being Unix-based, Linux systems are widely known for their stability and reliability. Many Linux servers on the Internet have been running for years without failure or even a restart.

Linux systems owe their stability to many factors, including the way the system manages program configurations, processes, and security.

With Linux, you can modify a system or program configuration file and apply the changes without rebooting the server, which is not the case with Windows:

  • When software is installed, you must REBOOT.
  • When you uninstall software, you need to REBOOT.
  • If you just installed a Windows update, REBOOT.
  • When the system seems to slow down, REBOOT.

Linux helps your system run smoothly for a longer period of time.

If a process becomes unstable, you can send it an appropriate signal using commands such as kill, pkill, and killall.
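
A quick sketch of how those commands are used, assuming a misbehaving process named myapp (an illustrative name):

    # Look up the process ID, then send the default TERM signal
    pgrep myapp
    kill 12345          # replace 12345 with the PID printed above

    # Or match by name without looking up the PID
    pkill myapp

    # Force-kill every process with that exact name (SIGKILL)
    killall -9 myapp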

It is perfect for businesses.

For enterprises, Linux delivers continuous vulnerability and security updates, from the upstream community itself or from a specific OS vendor, remedying critical issues by the next business day where possible to minimize business impact.

In addition, tooling such as OpenSCAP automates regulatory compliance and security configuration remediation across business systems and within containers, scanning images and checking and remediating them against vulnerability and configuration security baselines.

It is more private.

Privacy on Microsoft’s Windows 10 does not look convincing: the OS has received enormous criticism for how it collects data, forcing you to switch off all the telemetry modules in its privacy settings.

Linux distributions, on the other hand, collect little or no data, and you don’t need additional tools to protect your privacy.

It has better community support.

Linux forums provide good solutions because the community helps solve your problems. Just post a query in one of the many Linux-related forums on the web and expect plenty of detailed replies, so there is no need to hire an expert and spend a fortune.

Summary

Even though Linux’s edge has already been discussed above, you still need a deeper understanding of both Linux and Windows systems.

Weigh each side’s pros and cons, and balance everything from different perspectives.

You can also work across platforms, with both Windows and Linux, to experience each one’s capabilities first-hand.

By Tuan Nguyen

man in the matrix

Technology review – Automatic programming

“Write code that writes code…”

This is one of the most interesting pieces of advice found in the book The Pragmatic Programmer.

Programmers have long wondered whether there are ways to simplify the coding process.

Imagine having predefined lines of code like headers, libraries, and constructors written already.

Is it achievable?

A form of automated programming actually existed long ago, but it meant something very different from what today’s generation of programmers would expect.

TL;DR

Automatic computer programming, or simply automatic programming, is a type of computer programming in which program code is automatically generated by another program based on certain specifications. In other words, a program is written that writes more code, which then goes on to create more programs.

It combines techniques from artificial intelligence and compilers.

Automatic Programming

automatic programming principle

Source: cs.utexas.edu

The idea of “code that writes code” is real, and it is called automatic programming.

As far back as the 1940s there was already code automation, but it was not what you might think.

Automatic programming then meant automating the manual process of paper-tape punching, which produced the programs for punched card machines.

Later on, it meant the translation of high-level programming languages such as Fortran and ALGOL into low-level machine code.

Today’s automatic programming is an automated approach in which the end user provides high-level specifications (easily understood by humans) and a generator program converts them into machine-executable code.
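
As a toy illustration of the idea, not any particular tool, the JavaScript below takes a tiny high-level specification and generates runnable source code from it; the spec format and the generateClass helper are invented for this sketch:

    // A tiny "specification": a type name and its field names.
    const spec = { type: 'Point', fields: ['x', 'y'] };

    // Generate JavaScript source code from the specification.
    function generateClass({ type, fields }) {
      const params = fields.join(', ');
      const assigns = fields.map(f => `    this.${f} = ${f};`).join('\n');
      return `class ${type} {\n  constructor(${params}) {\n${assigns}\n  }\n}\nreturn ${type};`;
    }

    // Turn the generated source into a real, callable class.
    const source = generateClass(spec);
    const Point = new Function(source)();

    console.log(source);          // the code that was written by code
    console.log(new Point(3, 4)); // Point { x: 3, y: 4 }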

Techniques

AI and compiler techniques together make this type of programming possible:

Artificial Intelligence.

Whether done by humans or by machines, writing programs relies on knowledge of algorithms, data structures, and design patterns.

  • AI techniques are needed to represent, find, and instantiate design patterns.
  • Search may be needed to find a combination of components that accomplishes the desired task.

Compilers.

Compiler techniques are used to generate and manipulate programs, representing the code in the form of abstract syntax trees. Techniques such as code optimization are applied so that the generated code is efficient.

  • It is better to know what optimizations a compiler can perform, so the program generator does not need to duplicate them.
  • As used by compilers, the central representation of a program is the Abstract Syntax Tree (AST); see the sketch below.
  • Lisp code can be viewed as a kind of AST.
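
As a minimal sketch of that idea (the node shapes are invented for the example), a program can be represented and evaluated as a plain tree of objects:

    // A tiny expression AST for (2 + 3) * 4, built from plain objects.
    const ast = {
      op: '*',
      left: { op: '+', left: { value: 2 }, right: { value: 3 } },
      right: { value: 4 },
    };

    // Walk the tree and evaluate it bottom-up.
    function evaluate(node) {
      if ('value' in node) return node.value;
      const l = evaluate(node.left);
      const r = evaluate(node.right);
      return node.op === '+' ? l + r : l * r;
    }

    console.log(evaluate(ast)); // 20

A code generator works in the opposite direction: it builds such a tree first, then prints it out as source code.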

Categories

Automatic programming can be divided into two (2) categories:

  • Generative programming. Standard libraries are used to improve the efficiency and speed of programming; the programmer does not need to re-implement functionality or even know how it works. For instance, glut.h is a graphics library header for C/C++ that makes it easy to implement OpenGL programs.
  • Source code generation. Of real interest to AI researchers, source code is generated from a model or template through a programming tool or an integrated development environment (IDE). One good example is the Google/MIT App Inventor, where users simply drag and drop the functions they want and visually connect them to define how the app works, without ever typing a line of code.

Popular Uses of Automatic Programming

Microsoft’s T4 (Text Template Transformation Toolkit) consists of template code that users write at design time; the templates are then transformed into output code, saving the overhead of writing all of it manually.

Acceleo, a code generator for Eclipse, generates text representations in languages such as PHP, Python, and Java from Eclipse Modeling Framework models defined in UML. Actifsource is an Eclipse plugin that allows graphical modelling using templates.

Summary

In the future, programmers may not need to write code anymore, with the task fully automated.

There’s no reason to panic about this.

The trade would still continue; it would simply evolve from specific problem solving to general problem solving.

Additionally, domain knowledge varies in complexity and so far only humans are capable of devising solutions to specific problems.

By Tuan Nguyen

apache vs nginx

Technology review – Apache vs Nginx

The two most popular open source web servers worldwide, Apache and Nginx, are responsible for serving over 50% of traffic on the internet.

Even though the two are quite similar, in that both handle requests, analyze them, and send back the corresponding documents to be viewed in a visitor’s browser, they should not be regarded as entirely interchangeable.

Each excels in its own way at managing diverse workloads and working with other software to provide a complete web stack. It is also important to understand the situations in which you might need a thorough evaluation when choosing your web server.

Let us dig deep into the differences between Apache and Nginx and how each server stacks up in several areas.

TL;DR

Apache and Nginx sit nearly side by side at the top of the web server popularity charts. Both are used by large Fortune 500 companies around the globe, and together they are behind more than half of the websites currently on the Internet.

Nginx is most often compared to Apache because of their shared open-source philosophy.

In general, their main difference lies in their design structure.

Apache uses a process-driven approach, creating a new process or thread for each request, while Nginx uses an event-driven architecture, handling multiple requests within one thread.
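
To make the contrast concrete, here is how each server expresses its concurrency model in configuration. The directives are real, but the values are illustrative, not tuning advice:

    # nginx.conf: a few worker processes, each running an event loop
    # that multiplexes many connections
    worker_processes auto;
    events {
        worker_connections 1024;
    }

    # Apache httpd.conf with the prefork MPM: a pool of processes,
    # each handling one connection at a time
    <IfModule mpm_prefork_module>
        StartServers       5
        MaxRequestWorkers  150
    </IfModule>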

Apache

apache logo

Logo: Apache Software Foundation // Composition: ZDNet

Created by Robert McCool in 1995, the Apache HTTP Server has been the most popular server on the internet since 1996 and reached the 100-million-website milestone in 2009. It powers up to 52% of all websites worldwide.

Administrators commonly pick this web server for its flexibility, power, and widespread support. It has an extensive, dynamically loadable module system and can process a large number of interpreted languages without handing work off to separate software.

Due to its popularity, Apache benefits from great documentation and integrated support from other software projects. It is commonly used in the LAMP (Linux, Apache, MySQL, PHP) web stacks found on shared hosting servers and has a rich set of features that can be enabled by installing one of the 60 official modules.

Nginx

nginx logo

Source: NGINX

An answer to the so-called C10K challenge – “How do you design a web server which can handle ten thousand concurrent connections?” – Nginx (pronounced “EN-jin-EKS,” or “Engine X”) was begun in 2002 by Russian developer Igor Sysoev and publicly released in 2004.

As Owen Garrett, Nginx’s project manager, said: “Nginx was written specifically to address the performance limitations of Apache web servers.”

In other words, compared to earlier servers such as Apache, Nginx innovated by using an asynchronous, event-driven architecture; it anchors the LEMP stack (Linux, Nginx (“En-juhn-ex”), MySQL, PHP).

In March 2019, Nginx Inc. was acquired by F5 Networks for $670 million; at that point, as TechCrunch reported, the Nginx server was powering “375 million websites with some 1,500 paying customers.”

Battle Ground: Apache vs. Nginx

Growing popularity. More than 65% of websites ran on Apache up until 2012, and it is widely regarded as one of the pioneering pieces of software behind the growth of the World Wide Web (WWW); much of its popularity rests on this historical legacy.

But that dramatically changed.

The popularity gap between Apache and Nginx is closing fast. According to W3Tech.com, as of January 14, 2019, Apache (44.4%) was just slightly ahead of Nginx (40.9%) in terms of websites using each server.

server popularity ranking

Speed. A good web server should run at great speed and respond easily to connections and traffic from anywhere.

In terms of server speed, a documented experiment compared two popular travel websites, one based on Apache (Expedia.com) and one on Nginx (Booking.com).

Measured via an online tool called Bitcatcha, the comparison covered multiple server locations and was judged against Google’s benchmark of 200 ms.

The results were astounding.

Booking.com (Nginx) was rated “exceptionally quick,” whereas Expedia.com (Apache) was rated “above average and could be improved.”

Security level. In terms of security, both projects take security seriously and offer strong protection against DDoS attacks, malware, and phishing. Both periodically release security reports and advisories.

Concurrency features. Another test again compared Booking.com (Nginx) with Expedia.com (Apache), this time based on stress tests at Loadimpact.com.

With 25 virtual users, the Nginx website recorded 200 requests per second, 2.5 times Apache’s 80 requests per second.

Therefore, for a dedicated high-traffic website, Nginx is definitely a good choice.

Versatility. A good web server should be flexible enough to allow customization. Apache does this quite well with its .htaccess mechanism, which allows administrator duties to be decentralized; Nginx does not support it. That is why Apache is far more popular with shared hosting providers.
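
As a small sketch of that decentralization, a site owner on shared hosting can drop a file like this into their own directory, with no server restart and no access to the main configuration (the rewrite rule is illustrative, and the host must have enabled overrides):

    # .htaccess: per-directory configuration, picked up on the next request
    RewriteEngine On
    RewriteRule ^blog/(\d+)$ blog.php?id=$1 [L]

    # Disable directory listings for this directory only
    Options -Indexes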

Summary

Overall, Nginx narrowly wins the match 2-1, but this comparison of technical parameters cannot paint the full picture. Both web servers are useful in their own ways.

Apache has the more convenient .htaccess file, along with lots of dynamic modules. Nginx is popular for VPS hosting, dedicated hosting, and cluster containers.

By Tuan Nguyen

xml heart tag

Discussion – Why is JavaScript loved by developers?

A flexible and powerful programming language, JavaScript is consistently used across mobile sites, games, and web applications. It has become a core component of web technology along with HTML and CSS.

According to numerous sources, including the Stack Overflow’s Annual Survey of 2018, JavaScript is the most commonly used programming language.

As stated in that same 2018 developer survey, “For the sixth year in a row, JavaScript is the most commonly used programming language.” Among the most commonly used libraries, frameworks, and tools, JavaScript-based technologies such as Node.js, AngularJS, and React also top the list.

popular technology list

Source: Stack Overflow

Another survey revealed that JavaScript is used by 88% of all websites.

For all this success, JavaScript was never meant to become the cornerstone of modern web development. In fact, the language was created in less than two weeks during the 1990s, with a very different purpose in mind: running on both client and server.

Back then, it wasn’t a complete success though.

It took years for JavaScript to be taken seriously as a backend language, but it rapidly thrived on the frontend, eventually becoming the standard programming language of the web.

TL;DR

While HTML is for structure and CSS is for style, JavaScript provides interactivity to web pages in the browser.

The name JavaScript was chosen in an attempt to ride the wave of Java’s popularity and speed up adoption.

You will not find many similarities between the two languages today.

JavaScript’s creation made it possible to satisfy two different audiences.

The first is component writers and enterprise-level professionals using Java; the second is scripters and designers using JavaScript.

This second group is what, in modern web development terms, we would call frontend developers.

Why do developers love JavaScript?

javascript features

Source: TutorialAndExample

A scripting language inserted directly into the HTML of a page, JavaScript is the only programming language of its kind that web browsers understand natively. We will have to wait for WebAssembly to mature further before anything is on a par with JavaScript.

With JavaScript, browsers can read, interpret, and then run the program, creating powerful client-side experiences.

Here are a few JavaScript-powered things you see every time you spend even two minutes in a web browser:

  • Autocomplete;
  • Loading new content or data onto the page without reloading the page;
  • Rollover effects and dropdown menus;
  • Animating page elements (i.e. fading, resizing or relocating);
  • Playing audio and video; and
  • Validating input from forms.

And since web servers run on different languages such as Python, PHP, Ruby, Java, or .NET, JavaScript sits comfortably alongside all of them.
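
As a minimal sketch of that client-side role, a few lines of script can validate a form field and give instant feedback without any server round trip (the element IDs and the regular expression are invented for the example):

    <input id="email" placeholder="you@example.com">
    <span id="hint"></span>
    <script>
      // Validate as the user types: no page reload, no server call.
      const email = document.getElementById('email');
      const hint = document.getElementById('hint');
      email.addEventListener('input', () => {
        const ok = /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email.value);
        hint.textContent = ok ? '' : 'Please enter a valid email address.';
      });
    </script>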

The importance of JavaScript

JavaScript’s importance is wide-ranging, which is why it is popular among web developers:

  • JavaScript is supported by all major web browsers, including Google Chrome, Internet Explorer, Firefox, Safari, and Opera.
  • It has lots of frameworks and libraries, making it easy for web developers to build large JavaScript-based web applications.
  • You can write JavaScript code by just opening a text editor – it is easy to write, with no special tool required.
  • With Google implementing many optimization techniques to increase the loading speed of mobile web pages, developers use JavaScript to optimize websites for mobile devices through Accelerated Mobile Pages (AMP).
  • Although an interpreted programming language, JavaScript simplifies the development of complex web applications by letting developers simplify the application’s composition.
  • Since most developers practice responsive web design to make a website accessible and usable across multiple browsers and devices such as smartphones and desktops, JavaScript helps optimize web pages for mobile devices.
  • It is fast for the end user: visitors don’t need to fill out an entire form and submit it just to be told they made a typo in the first field, then fill the whole thing in again. JavaScript provides immediate feedback the moment they make a mistake.
  • JavaScript is simple for developers to learn, with syntax close to English.
  • It has third-party add-ons that help developers write snippets which can be reused on whichever web pages need them.
  • With JavaScript, code executes on the user’s processor instead of the web server, reducing load on the server and saving bandwidth.

Summary

JavaScript is a mainstream programming language, loved by many kinds of developers because it helps them build large-scale web applications quickly and easily.

It also enhances speed, performance, functionality, usability, and features of the application without any hassle, delivering optimal user experience across various devices, browsers, and operating systems.

Additionally, JavaScript has libraries, frameworks, and tools to match the requirements of almost any project.

By Tuan Nguyen

illustration of binary code

Discussion – Should we still use low level programming languages?

Abbreviated LLL, a low level language is a type of programming language that contains basic instructions recognized by a computer.

It is often cryptic and not human-readable, compared to the high-level languages used by software developers.

The word “low” refers to the small or nonexistent amount of abstraction between the language and machine language, which is why LLLs are sometimes described as being “close to the hardware.”

Since instructions written in low level languages are machine dependent, a low-level programming language interacts directly with the registers and memory.

As a result, programs developed using low level languages are machine dependent and not portable.

TL;DR

Used to write programs tied to the specific architecture and hardware of a particular type of computer, low level languages (LLL) are closer to the native language of a computer (binary), making them harder for programmers to understand.

They are designed to operate on and handle the entire hardware and instruction set architecture of a computer directly, and they are most appropriate for developing new operating systems, device drivers, and applications, or for writing firmware code for micro-controllers.

What are low level programming languages?

level of programming languages

Source: mathworks.com

Low-level programming languages’ prime function is to operate, manage, and manipulate computing hardware and components. Such code is much less human-readable, yet easier and faster for computers to process.

Programs and applications written in low-level languages are directly executable on the computing hardware, without any interpretation or translation.

Though it is harder for programmers to write, debug, and maintain low-level programs, the result is efficient: it runs fast because it is close to machine language, and it takes up a smaller memory footprint on your computer.

It has two (2) common types:

Machine language.

The closest language to the hardware, machine language consists of instructions executed directly by the computer.

Each instruction is a sequence of binary bits, and each performs one very small, specific task.

For example:

SUB AX, BX   ;00001011 00000001 00100010

is an instruction set to subtract values of two registers AX and BX.

Generally, instructions written in machine language are machine dependent and vary from computer to computer.

Even in the early days of programming, developing programs in this language was tedious, since the programmer had to remember sequences of binary for different computer architectures. Today it is not much used in practice.

Assembly language.

An improvement over machine language, assembly language is similar to machine language in that it also interacts directly with the hardware. The difference is that instead of raw binary sequences to represent instructions, this language uses mnemonics.

Mnemonics are short, abbreviated English words specifying a computer instruction. Each binary instruction has a specific mnemonic. They are architecture-dependent, and there is a separate list of mnemonics for each computer architecture.

Examples of mnemonics include ADD, MOV, and SUB – far easier to remember than binary sequences.

The language relies on a special program called an assembler, which translates the mnemonics into the corresponding machine code.

Its advantages are the following:

  • It can make use of special hardware or special machine-dependent instructions (e.g. on a specific chip);
  • The translated program requires less memory;
  • The code can execute faster;
  • The programmer has full control over the code; and
  • It works directly on memory locations.
Machine Language:   1010 0011 0001 1001
Assembly Language:  ADD R3, R1, R9

Source: Abstraction Level Taxonomy of Programming Language Frameworks by Dr. Brijender Kahanwal

Advantages of Low Level Programming Languages

  • Programs developed in low level languages are fast and memory efficient.
  • Programmers can make better use of the processor and memory.
  • No compiler or interpreter is needed to translate the source to machine code, cutting out compilation and interpretation time.
  • They allow direct manipulation of computer registers and storage.
  • They can communicate directly with hardware devices.

Disadvantages of Low Level Programming Languages

  • Programs developed in low level languages are machine dependent and not portable.
  • They are difficult to develop, debug, and maintain.
  • Low level programs are more prone to error.
  • Low level programming usually results in poor programmer productivity.
  • Programmers must have additional knowledge of the computer architecture of the particular machine they are programming.

Should we still use this type of programming language?

low level language vs high level language

Source: Medium

Summing it up, it depends on what you want to program.

Being closer to the native language of a computer (binary) has important benefits: low-level code requires little interpretation by the computer, so it generally runs very fast and gives programmers total control over data storage, memory, and retrieval.

Their counterparts, high level languages, are written in a form close to human language. They are intuitively easier to grasp, allowing programmers to write code much more efficiently. They are also considered safer, since they have more safeguards in place to keep coders from issuing poorly written commands that could cause damage. The downside is that they don’t give much control over low-level processes.

If you’re planning to write operating systems, kernels, or anything that needs to run at the absolute highest speed possible, a low level language might be the best choice.

But numerous modern apps are written in higher-level or even domain-specific languages.

So you might consider learning both kinds of language instead.

By Tuan Nguyen

software development methodology

Software Development Methodology – Crystal

Crystal, also known as the Crystal Methods, is a family of methodologies (the Crystal family) developed by Alistair Cockburn from his study of and interviews with teams in 1998.

The name Crystal comes from the gemstone: in software terms, the faces (representing techniques, tools, standards, and roles) are different views onto the “underlying core” of principles and values.

It primarily focuses on people and their interactions when they work on a certain software development project rather than on processes and tools.

Plus, it also centers on business-criticality and business-priority of the system under development.

Crystal methods are considered “lightweight methodologies,” promoting early, frequent delivery of working software, adaptability, high user involvement, and the removal of bureaucracy and distractions.

In Cockburn’s own words, “Crystal is a family of human-powered, adaptive, ultra light, ‘stretch-to-fit’ software development methodologies.”

crystal

Source: ActiveCollab

TL;DR

Crystal is a collection of Agile software development approaches introduced by Alistair Cockburn in 1998. The concept focuses on communication, with special emphasis on interaction, community, people and skills.

Cockburn believes that people’s skills and talents, as well as the way they communicate, have the biggest impact on the project outcome.

He adds that we should view product development as a game that stimulates everyone to interact, become creative, and produce brilliant ideas. Instead of focusing on questions like “Is our model accurate?”, we should be looking for answers to questions like “Is our product meeting the customer’s needs?” or “Are our goals aligned as a team?”

The Crystal Methods

crystal lifecycle

Source: A Practical Guide to Seven Agile Methodologies Part 2

One of the most lightweight, adaptable approaches to software development, Crystal methodology is based on two (2) fundamental assumptions:

  • Teams can streamline their processes as they work and become more optimized.
  • Projects are unique and dynamic and require specific methods.

 

Crystal actually comprises a family of agile methodologies in different variants (colors), each of which has unique characteristics driven by factors such as team size, system criticality, and project priorities. The following are some of the Crystal methodologies used in real projects:

  • Crystal Clear – for teams of up to 6 people.
  • Crystal Yellow – for teams of 10-20 people.
  • Crystal Orange – for teams of 20-40 people.
  • Crystal Red – for teams of up to 80 people.
  • Crystal Maroon – for teams of up to 200 people.

Defining the Crystal Method Characteristics

  • Human-powered. People are the most vital element of Crystal, and all processes and tools exist to serve them. It also means that people are capable of organizing themselves, becoming more organized and competent as the process develops.
  • Adaptive. An approach rather than a set of prescribed tools and techniques, Crystal is a stretch-to-fit methodology: processes and tools are not fixed, but adjusted to meet the requirements of the team and the project at hand.
  • Ultra-light. Crystal is known as a “lightweight methodology” because it does not advocate heavy documentation, management, or reporting. It focuses on open communication between team members and a transparent workflow between the team and the client.

Certain practices are also crucial for the successful implementation of any project. Crystal’s practices include:

  • An iterative and incremental development approach. For the overall refinement and completion of the software, the project is developed in iterations that are generally time-boxed; user feedback taken at the end of an iteration is used to plan the next one; and new features are added in every subsequent iteration.
  • Active user involvement. Given Crystal’s people-centered and transparent nature, users are not only actively involved but also regularly informed about the project’s progress.
  • Delivering on commitments. The team strives to secure frequent delivery of client-valued, potentially shippable functionality.

7 Properties of Crystal Method

Across all methods in the Crystal family, the following seven (7) common properties prevail:

  • Frequent delivery. Teams frequently deliver working, tested code to real users, so they never discover too late that they have invested their energy and time in a product nobody wants.
  • Reflective improvement. No matter how good or bad the product has become, there are always techniques and methods by which the team can improve it.
  • Close or osmotic communication. Team members pick up useful information without being directly involved in the discussion of a given matter.
  • Personal safety. To build a healthy working atmosphere and a genuine team culture, team members should practice open and honest communication, whether presenting a new idea or a possible problem, without fear.
  • Focus. Each team member knows exactly what to work on, allowing them to focus their attention and avoid switching between tasks.
  • Easy access to expert users. Teams maintain communication with, and get regular feedback from, real users.
  • Technical environment with automated tests, configuration management, and frequent integration. Very specific tooling for software teams, emphasizing continuous integration so that errors are caught within minutes.

Summary

The Crystal approach considers people the most important element, so processes should be modeled to meet the team’s requirements. It takes an iterative and incremental development approach, involves users actively, and delivers on commitments. It is also adaptive, without a set of prescribed tools and techniques, and it doesn’t require much documentation, management, or reporting.

 

If you enjoy reading about software development methodologies, let’s take a look at other blog posts.

Software Development Methodology – Extreme Programming

Software Development Methodology – Scrum

Software Development Methodology – Kanban

Software Development Methodology – Lean

Software Development Methodology – Dynamic Systems Development Method (DSDM)

Software Development Methodology – Feature Driven Development (FDD)

By Tuan Nguyen

software development methodology

Software Development Methodology – Feature Driven Development (FDD)

Client-centric, architecture-centric, and pragmatic, Feature Driven Development (FDD) is an agile framework that primarily focuses on the feature set the client values, and it is known for short iterations and frequent releases.

FDD was first introduced in 1999 in the book Java Modeling in Color with UML, while its first real-world application was on a 15-month, 50-person project for a large Singapore bank in 1997, immediately followed by a second, 18-month, 250-person project.

The term “feature” in the FDD context doesn’t necessarily mean a product feature; FDD features are more akin to user stories in Scrum. Thus, “complete the login process” might be considered a feature in the FDD methodology.

Not many people talk about FDD, though it is often mentioned in passing in agile software development books and forums.

people plan a project

Source: DevPro Journal

TL;DR

Feature Driven Development is a customer-centric software development methodology built largely around discrete “feature” projects.

With this concept, developers can plan and manage each stage of project development, keeping client requests prioritized, responding to them in time, and keeping clients satisfied.

This is done by mapping out which features developers are capable of creating, breaking complex requests into a series of smaller feature sets, and then creating a plan for how to complete each goal over time.

The Feature Driven Development Methodology

feature driven development lifecycle

Source: New Line Technologies

Suitable for projects that have large development teams, follow pre-defined standards, and require quick releases, FDD focuses on short iterations, each of which serves to work out a certain part of the system’s functionality.

The FDD concept’s development is credited to Jeff De Luca and Peter Coad, who were working on a banking project in Singapore back in 1997.

Under the FDD concept, the project is divided into “features”: small pieces of the complete project.

Feature Driven Development Roles

FDD has six (6) key roles:

  • Project Manager (PM). The administrative head of the project, reporting progress, handling budgets, and managing equipment, space, and resources.
  • Chief Architect (CA). Responsible for the overall design of the system.
  • Development Manager (DM). Leads day-to-day development activities.
  • Chief Programmers. Experienced programmers who have been through the entire software development lifecycle a few times before.
  • Class Owners. Developers working as members of small development teams under the guidance of a Chief Programmer.
  • Domain Experts. Users, sponsors, business analysts, or a combination of these – the knowledge base the developers rely on to deliver the correct system.

Other supporting roles, include:

  • Release Manager. Ensures that Chief Programmers report progress weekly, and reports directly to the Project Manager.
  • Language Guru. Responsible for knowing a programming language or a specific technology inside out.
  • Build Engineer. Sets up, maintains, and runs the regular build process.
  • Toolsmith. Creates small development tools for the development, test, and data conversion teams.
  • System Administrator. Configures, manages, and troubleshoots the servers and workstation networks specific to the project team.

Additional roles are as follows:

  • Tester. Independently verifies that the system’s functions meet the users’ requirements and that the system performs those functions correctly.
  • Deployer. Converts existing data to the new formats required by the new system and works on the physical deployment of new releases.
  • Technical Writer. Writes and prepares online and printed user documentation.

Feature Driven Development Processes

FDD has five (5) basic process steps:

  • Developing an overall model. Cross-functional, iterative, and highly collaborative, FDD pushes team members to work together to build an object model of the domain area, guided by the Chief Architect. Detailed domain models are created and then progressively merged into an overall model.
  • Building the feature list. Using the model from the previous step, the team or the chief programmer builds a list of features that would be useful to users and could be completed along a set release timeline.
  • Planning by feature. This step is all about organizing: plans are laid for the order in which the features will be implemented, and teams are then selected and assigned feature sets.
  • Designing by feature. At this stage, the chief programmer chooses the features for development and assigns them to feature teams consisting of the project manager, the chief architect, the development manager, the domain expert, the class owner, and the chief programmer.
  • Building by feature. Feature teams complete the coding, testing, and documentation of each feature, then promote it to the main build.

Best practices

Feature Driven Development is built around overall software engineering best practices:

  • Identify the domain object model, or the scope of the problem that needs to be solved.
  • Break down complex features into smaller functions and subsets.
  • Assign features to a single owner in order to ensure consistency and code integrity.
  • Build dynamic and diverse feature teams to collect multiple design options.
  • Perform routine code inspections of each feature before merging it into the main build.
  • Enforce project visibility through frequent, accurate progress reports during all steps.

Summary

An iterative and incremental software development methodology, Feature Driven Development (FDD) aims first to develop high-level features, scope, and a domain object model, and then to use these to plan, design, develop, and test the specific requirements and tasks according to the overarching feature they belong to.

This concept is ideal for projects that have large development teams, follow pre-defined standards and require quick releases.

 

If you enjoy reading about software development methodologies, let’s take a look at other blog posts.

Software Development Methodology – Extreme Programming

Software Development Methodology – Scrum

Software Development Methodology – Kanban

Software Development Methodology – Lean

Software Development Methodology – Dynamic Systems Development Method (DSDM)

Software Development Methodology – Crystal

By Tuan Nguyen

software development methodology

Software Development Methodology – Dynamic Systems Development Method (DSDM)

DSDM, also known as the Dynamic Systems Development Method, is an agile project delivery framework addressing the full project lifecycle and its impact on the business, including the guidance needed to bring a product through the entire project and even its releases.

The method has a four-phase framework, namely:

  • Feasibility and business study;
  • Functional model / prototype iteration;
  • Design and build iteration; and
  • Implementation.

It is an iterative, incremental approach first conceived in 1994, when project managers using another agile framework, the Rapid Application Development (RAD) methodology, decided that the new iterative approach to software development needed more governance and stricter guidelines.

team meeting

Source: Emotive Brand

TL;DR

Dynamic Systems Development Method (DSDM) is a framework largely based on Rapid Application Development (RAD). It focuses on information systems projects characterized by tight schedules and budgets.

The method’s primary aim is to deliver business needs and real business benefits. DSDM also makes sure that a project’s benefits are clear, its solution is feasible, and solid foundations are in place before work starts.

DSDM Project Delivery Method

dsdm lifecycle

Source: Methods & Tools

DSDM is an agile development type that prioritizes schedule and quality over functionality.

It uses the MoSCoW method of prioritization, breaking a project into four (4) different types of requirements:

  • Must have (M) – requirements critical to the project’s success.
  • Should have (S) – high-priority items that should be included in the solution if possible.
  • Could have (C) – less critical requirements, often seen as nice-to-have items.
  • Won’t have (W) – the least critical requirements, which, as the name suggests, will not be covered in the project timeframe.

DSDM Principles

The method is most frequently applied to software development projects, though it suits any industry and any project, big or small.

It has eight (8) principles:

  • Focus on the business need. The team should understand business priorities and commit to delivering at least the Minimum Usable Subset.
  • Deliver on time. The team splits the work into increments, prioritizing project requirements and protecting deadlines to ensure the project is delivered on time. Long-term projects stay on schedule through the on-time delivery of each increment, or Timebox.
  • Collaborate. Successful collaboration is achieved by partnering with the right stakeholders, improving the whole team’s performance.
  • Never compromise quality. The desired quality of the project’s products is agreed by the team at the very beginning of the project, by defining the acceptance criteria.
  • Build incrementally from firm foundations. Before any significant resources are dedicated to delivery, the team builds a solid understanding of the project requirements and the proposed solution to create a strong foundation. After each delivered increment, priorities and ongoing project viability are reassessed.
  • Develop iteratively. With a results demonstration and business feedback after every iteration, teams encourage creativity, learning, and experimentation through iterative development.
  • Communicate continuously and clearly. Informal communication is encouraged, and daily stand-up meetings and workshops are conducted so that the project’s communication needs are fulfilled.
  • Demonstrate control. Project managers conduct planning and progress tracking, which are crucial for keeping the project under control.

Summary

Intended to be more than just a framework for creating software development packages in increments, DSDM is a full life-cycle approach that extends beyond software development projects.

It uses the MoSCoW method of prioritization – Must have (M), Should have (S), Could have (C), and Won’t have (W) – which plays an integral part in a project’s success.

Through DSDM, teams focus on business needs, deliver projects on time, collaborate successfully for better performance, agree on the desired quality of the project’s products at the earliest stage of development, build incrementally from firm foundations, develop iteratively, communicate continuously and clearly, and demonstrate control over the project.

 

If you enjoy reading about software development methodologies, let’s take a look at other blog posts.

Software Development Methodology – Extreme Programming

Software Development Methodology – Scrum

Software Development Methodology – Kanban

Software Development Methodology – Lean

Software Development Methodology – Feature Driven Development (FDD)

Software Development Methodology – Crystal

By Tuan Nguyen

software development methodology

Software Development Methodology – Lean

Built on the concept of reducing waste and adding customer-defined value to products and services, Lean development seeks to make small, incremental process changes that improve speed, efficiency, and quality.

Founded on two pillars, respect for people and continuous improvement, Lean is described in Japan as a mindset rather than a set of tools.

According to Dr. Shigeo Shingo, a Toyota engineer and expert on the methodology, “Lean is a never-ending elimination of waste; it is committed to total customer satisfaction, total commitment to quality and total employee involvement…”

Taiichi Ohno, an industrial engineer at Toyota, first developed the Lean methodology in the 1950s, which was then known as the Toyota Production System.

pushing cube vs pushing globe

Source: SOLABS

TL;DR

Lean is a methodology that aims to optimize efficiency and minimize waste in the development of software. It is built on the idea that less is more, streamlining every part of the software development lifecycle.

Software development is a natural application of the Lean methodology because it follows a defined process, with defined conditions of acceptance, and results in the delivery of tangible value.

What is Lean Methodology?

The Lean approach is all about optimizing processes and eliminating waste. With it, operations can cut costs while still delivering the same high-quality product that customers want and are willing to pay for.

This is done by evaluating the process thoroughly to determine what development teams are doing right, and removing or adapting any steps that may generate waste. This so-called waste, also known as muda, encompasses anything that doesn’t add value to the end product.

Thus, Lean is an improvement and problem-solving method that strives to reduce or eliminate activities that don’t add value for the customer.

As management guru Peter Drucker said, “There is nothing so useless as doing efficiently that which should not be done at all.”

lean methodology lifecycle

Source: Rotating Solutions, Inc.

Lean Principles

Lean is based on five (5) principles, aiming to help companies change the way they do business for the better:

Value.

Value is always defined from the standpoint of the end customer’s needs for a specific product; it comes from understanding what the customer is willing to pay for.

This principle is categorized into three (3) ways:

  • Non-value add activity (waste);
  • Value add activity; and
  • Business value add activity.

Value Stream Mapping.

Map all the steps and sequences in the value stream for each product family, eliminating the steps that do not create value.

Flow.

Make products and information requests flow smoothly through the business without disruptions or delays.

Pull.

Once flow is introduced, let customers pull value from the next upstream activity. This works by replacing only the material that is used, eliminating excess inventory so the business can respond quickly to customer requirements.

Strive for Perfection.

Perfection is pursued by continually seeking to eliminate waste and improve the value provided to customers.

The Eight (8) Types of Wastes

Companies may carry hidden wastes that drive up the costs of their products and services. Lean helps them identify the eight (8) types of waste:

  • Motion. Unnecessary movement of personnel, equipment, or information within a workstation, or motion in a job task that takes too much time to complete.
  • Transportation. Moving items or information that are not required from one location to another.
  • Waiting. Waiting for parts, tools, supplies, or information – e.g., an absence of flow, or a backlog from earlier steps.
  • Overproduction. Producing more than is required to meet current demand.
  • Defects. Any repairs or alterations to the product after it has been made.
  • Inventory. Any supplies and materials that are kept around or left unprocessed due to line imbalance or overproduction.
  • Unrecognized talent. Failure to utilize the knowledge and skills of employees.
  • Extra processing. Over-processing information, or doing any activity that does not add value or is not required to produce the product or service.

5S: Tools for Reducing Waste

The 5S System in the Lean Development Methodology represents the Japanese words that describe the steps of a workplace organization process.

  • Seiri (Sort). Separate the essential from the nonessential, and get rid of things that aren’t needed.
  • Seiton (Straighten). Store essential materials properly, so the right item can be picked at the right moment without wasting time.
  • Seiso (Shine). Keep the workspace clean of garbage, dirt, and the like, so problems can be identified more easily.
  • Seiketsu (Standardize). Set up standards to maintain the first three steps and make 5S a habit.
  • Shitsuke (Sustain). Implement the behaviors and habits that maintain discipline in the workplace over the long term.

What does a team get for following these principles?

By implementing the Lean development methodology, companies can attain:

  • Greater productivity;
  • Smoother operations;
  • Greater flexibility and responsiveness to customer demands;
  • Fewer defects;
  • Improved product quality;
  • Reduced lead times and the ability to meet demand;
  • Increased customer satisfaction;
  • Empowered employees; and
  • A safer working environment.

Summary

By reducing waste and adding customer value, Lean supports long-term company survival, eliminating the hidden costs that weigh on a company’s products and services.

With its implementation, companies can achieve greater productivity, smoother operations, greater flexibility and responsiveness, fewer defects, improved product quality, increased customer satisfaction, empowered employees, and a much safer workplace.

 

If you enjoy reading about software development methodologies, let’s take a look at other blog posts.

Software Development Methodology – Extreme Programming

Software Development Methodology – Scrum

Software Development Methodology – Kanban

Software Development Methodology – Dynamic Systems Development Method (DSDM)

Software Development Methodology – Feature Driven Development (FDD)

Software Development Methodology – Crystal

By Tuan Nguyen

software development methodology

Software Development Methodology – Kanban

A popular workflow management method, Kanban (roughly translated as “card you can see”) is an Agile framework designed to manage the creation of products, emphasizing continual delivery without overburdening the development team. It helps a team harness the power of visual information, using sticky notes on a whiteboard to create a “picture” of its work.

The concept was first developed by Taiichi Ohno (Industrial Engineer and Businessman) in the late 1940s for Toyota automotive in Japan to overhaul its assembly and production system. It aims to optimally control and manage work and inventory at every stage of production.

One key reason it was developed was that Toyota’s productivity and efficiency lagged behind those of its American automotive rivals.

By utilizing the Kanban system, the Japanese automaker achieved a flexible and more efficient “just in time” production control system, increasing productivity while dramatically reducing its cost-intensive inventory of raw materials, semi-finished goods, and finished products.

NOTE: Kanban should not be confused with Lean; it aims not at eliminating waste but at optimizing the manufacturing process by regulating the supply of raw material.

kanban methodology

Source:  Zege Technologies

TL;DR

The Kanban methodology emphasizes balancing task demands against available capacity. It streamlines the visual representation of the workflow through a Kanban board, which is far more effective than the simplest to-do list because the human brain processes visuals better than any other kind of data.

A basic Kanban board uses three columns, known as lanes: To Do, Doing, and Done, though teams may decide to add more. The board is usually realized with colored sticky notes on a whiteboard, where the colors indicate priorities, assignees, or any other information vital to the project.

Kanban: An Introduction

kanban board

Source: KanbanFlow

Kanban usually requires real-time communication of capacity and full transparency of work through a Kanban board to allow team members to monitor and see the state of every piece of work at every stage.

It works on three (3) basic principles:

  • Visualization. The Kanban board serves as an information radiator, displaying the available tasks and showing their relationships to each other.
  • Limited amount of work in progress (WIP). This principle balances the flow-based approach: teams only commit to a new task once an existing task is completed.
  • Flow. Kanban’s core concept: when something is finished, the next highest-priority item from the backlog is pulled into play. By continuously promoting collaboration, Kanban encourages active, ongoing learning and improvement, defining the best possible team workflow. (A small sketch follows this list.)
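
As a toy illustration of the last two principles (the lane names, tasks, and limit are invented), a WIP limit and pull-based flow can be sketched in a few lines of JavaScript:

    // A minimal Kanban board: three lanes, with a work-in-progress limit on "doing".
    const board = { todo: ['task A', 'task B', 'task C'], doing: [], done: [] };
    const WIP_LIMIT = 2;

    // Pull the next task from the backlog only if the WIP limit allows it.
    function pullNext() {
      if (board.doing.length >= WIP_LIMIT) {
        console.log('WIP limit reached -- finish something first.');
        return;
      }
      const task = board.todo.shift();
      if (task) board.doing.push(task);
    }

    // Finishing a task frees capacity, so the next item is pulled in (flow).
    function finish(task) {
      board.doing = board.doing.filter(t => t !== task);
      board.done.push(task);
      pullNext();
    }

    pullNext(); pullNext(); pullNext(); // the third pull is refused by the WIP limit
    finish('task A');                   // completing work pulls 'task C' into play
    console.log(board); // { todo: [], doing: ['task B', 'task C'], done: ['task A'] }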

Kanban’s benefits

  • Visibility. With real-time status information, Kanban lets project managers see everything they need to manage initiatives in one place, rather than synthesizing information across countless sources.
  • More added value. The visibility the Kanban board provides lets project managers keep the work moving and immediately spot issues as they occur, adding value for their teams by removing blockages in the flow of the initiatives they manage.
  • Continuous improvement. Shared visibility into how work gets done lets project managers and their teams continuously improve the process together, delivering maximum value.
  • Improved communication among stakeholders. Shared visibility also allows project managers to communicate better with key stakeholders, providing executives and others with a bird’s-eye view of strategic initiatives. With a clear understanding of how work flows through the organization, stakeholders can make strategically smart decisions on behalf of the company.

Summary

Kanban is a popular workflow management method designed to give teams the visibility to deliver work on time, on budget, and on value. The concept is realized by using sticky notes on a whiteboard to create a “picture” of the team’s work.

It works on three principles: visualization, a limited amount of work in progress (WIP), and flow.

The method is beneficial because it provides visibility, gives teams added value, encourages continuous improvement, and improves communication with stakeholders.

 

If you enjoy reading about software development methodologies, let’s take a look at other blog posts.

Software Development Methodology – Extreme Programming

Software Development Methodology – Scrum

Software Development Methodology – Lean

Software Development Methodology – Dynamic Systems Development Method (DSDM)

Software Development Methodology – Feature Driven Development (FDD)

Software Development Methodology – Crystal

By Tuan Nguyen