[Image: Open sign]

Technology review – Open Source Licenses

Open-source licenses are licenses that allow software to be freely used, modified, and shared under defined terms and conditions.

These licenses allow users or organizations to adapt a program’s functionality to their specific needs.

Though the term originated in software development to designate a specific approach to creating computer programs, its meaning has since evolved to encompass a broader set of values around openness, ethics, and sustainability.

Today, when we say “open source” we are following the open source way.

Open source projects, products, or initiatives embrace and celebrate principles of:

  • Open exchange;
  • Collaborative participation;
  • Rapid prototyping;
  • Transparency;
  • Meritocracy; and
  • Community-oriented development.

Tl; dr;

Open-source licenses are legally binding contracts between the author and the users of a software component. They declare that the software can be used, including in commercial applications, under the specified conditions.

Open Source Licenses

The Open Source Compatibility Chart

Source: Duke Computer Science

Open source licenses are vital for setting out terms and conditions on which software may be used, modified, or distributed and by whom. They are also used to facilitate access to software as well as restrict it.

A license agreement lets potential users know which limitations the owners intend to enforce; at the same time, it gives owners a solid basis for their legal claims and greater control over how their work is used.

Originally, the open source movement started in 1983 when Richard Stallman, a programmer at the MIT Artificial Intelligence Laboratory at the time, said he would create a free alternative to the Unix operating system, then owned by AT&T; Stallman dubbed his alternative GNU, a recursive acronym for “GNU’s Not Unix.”

In Stallman’s vision, “free” software meant users were:

  • Free to use software as they saw fit;
  • Free to study its source code;
  • Free to modify it for their own purposes; and
  • Free to share it with others.

From then on, other programmers followed Stallman’s example. One of the most important was Linus Torvalds, the Finnish programmer who created the Linux kernel in 1991.

Major Types of Open-source Licenses

Open-source licenses come in two (2) major types:

Copyleft License.

A free software license that requires derivative works to be distributed under the same license terms as the original.

Under copyright law, authors have complete control over their materials.

Under a copyleft license, however, users are permitted to copy, modify, and distribute the copyrighted materials.

Authors still define the terms under which the materials may be used.

This type of license typically requires that the source code be made available whenever the software is distributed. Users therefore receive rights similar to those normally reserved for copyright holders, including activities such as distribution and copying.

Popular examples include the GNU General Public License (GNU GPL), Common Development and Distribution License (CDDL), Mozilla Public License (MPL), Affero General Public License (AGPL) and Eclipse Public License (EPL).

*Note: Not all copyleft licenses are compatible.

To comply with this type of license, you must generally release your modified source code under the same copyleft terms; you cannot simply relicense it or keep your changes proprietary.

As a result, GPL-licensed code is not typically suitable for inclusion in proprietary software, although some variants, such as LGPL (Lesser GPL) licensed code, may allow it.

Permissive License.

A free software license that allows copyrighted code to be freely used, modified, and redistributed with minimal restrictions. Permissive-licensed code can be included in proprietary, derivative software.

Popular examples include MIT License, BSD Licenses, Apple Public Source License, Apache License and Microsoft Public License (Ms-PL).

*Note: Though a permissive license is not as restrictive as a copyleft license, it still imposes some obligations (e.g. inclusion of the original license text or copyright notice) that must be met before the code can be redistributed.

Minor Types of Open-source Licenses

Open-source licenses also come in two (2) minor types:

Public Domain.

Code that can be used by the public without restriction or copyright. This type of license is intended to be truly free of costs and commonly has very few (if any) obligations with which you need to comply.

Popular examples include Creative Commons Zero (CC0).

*Note: Though public domain code should be the simplest license to deal with, PLEASE be aware that not all jurisdictions have a common definition of what public domain means. You should ALWAYS check local regulations.

Source Available.

An emerging type of license applied to code whose source is viewable but which cannot be offered “as a service,” the source available license is being defined largely in response to cloud providers.

Popular examples include Redis’ Source Available License (RSAL), MongoDB’s Server Side Public License (SSPL), the Cockroach Community License (CCL), or licenses that have had the Commons Clause added.


There are numerous open source licensing agreements a program or file may follow. So it is best to refer to the appropriate documentation to see what the original developer allows and prohibits.

By Tuan Nguyen

[Image: Windows vs Linux]

Discussion – Why Linux servers are more popular than Windows servers?

A server is a computer program or a device that provides a service to another computer program and its user, also known as the client. Servers are often dedicated machines that carry out few tasks other than their server duties.

Servers fall into categories that include file servers, database servers, print servers, and network servers.

Theoretically, whenever computers share resources with client machines they are considered servers.

Speaking of servers, there are two main web-hosting services on the market, Linux and Microsoft Windows.

Linux is open source, which is why it is typically cheaper and easier to run than a Windows server. Windows Server (a Microsoft product) carries charges: one must pay for the operating system and a periodic use license.

But for many companies, Windows Server is worth the price, because it generally offers a wider range of features and commercial support than Linux servers.

So how did we conclude that Linux is far more popular than Windows servers?

For more evidence, let us see the statistics.

For Web Servers

According to W3Cook, Linux powers the servers that run 96.5% of the top one million domains in the world (as ranked by Alexa).

W3Techs offers a more conservative figure, claiming that Linux powers around 70% of the top 10 million Alexa domains, with Windows holding the remaining 30%.

For Supercomputers

Linux utterly dominates the list of the top 500 most powerful supercomputers in the world.

Tl; dr;

Compared to Windows, Linux maintains a noticeable lead.

Linux isn’t just common on laptops, mobiles, and servers; governments around the world also use it in their military operations and educational systems.

It is far more secure and is the only OS used by TOP500 supercomputers. Further, it is designed to handle demanding business requirements like system administration, networking and database management.

Linux: Fast with High Security

[Image: Linux mascot]

Source: Wccftech

Linux is a family of free, open source software operating systems built around the Linux kernel.

The kernel was created in 1991 by Linus Torvalds, then a student at the University of Helsinki in Finland, as a personal project to build a new free operating system kernel.

According to studies by the Linux Foundation and SUSE, Linux is fast-becoming the operating system (OS) of choice for many organizations that operate servers that host cloud and big data applications.

Different industries, including finance, healthcare, military, government, and internet, use it to manage operations. Startups and smaller businesses also often choose the open-source OS for their servers because many of its distributions are freely available.

What makes Linux stand out

It is free and open source.

The first truly free Unix-like operating system, Linux is a completely open source project. There is no commercial vendor trying to lock users into certain products or protocols. Businesses are free to mix and match components to suit their needs, and getting a genuine copy of a Linux distro (e.g. Ubuntu, Fedora) costs nothing.

No wonder governments, organizations and major companies like Google, Amazon, and Netflix, are using the open source operating system in their own production systems.

It is more secure.

Linux-based operating systems are secure and suitable for servers. They implement a variety of security mechanisms to protect files and services from numerous attacks and abuses. They tightly restrict influence from external sources (i.e. users, programs, or systems) that could destabilize a server.

Integrated control features include centralized identity management and Security-Enhanced Linux (SELinux), which provides mandatory access controls (MAC) on a foundation that is Common Criteria- and FIPS 140-2-certified.

It has better control.

The open source model allows a business to employ multiple vendors, effectively avoiding what is called vendor lock-in. System admins also have powerful tools at their disposal, such as systemctl for managing services.

It is more stable and reliable.

Unix-based, Linux systems are widely known for their stability and reliability. Many Linux servers on the Internet have been running for years without failure or even being restarted.

Linux systems are stable thanks to many factors, including the management of system and program configurations, process management, and security implementation.

With Linux, you can modify a system or program configuration file and apply the changes without rebooting the server, which is not the case with Windows:

  • When software is installed, you must REBOOT;
  • When you uninstall software, you need to REBOOT;
  • If you just installed a Windows update, REBOOT;
  • When the system seems to slow down, REBOOT.

Linux helps your system run smoothly for a longer period of time.

In case a process goes unstable, you can send it an appropriate signal using commands such as kill, pkill and killall.

It is perfect for businesses.

Enterprises receive security updates continuously, either from the upstream community itself or from a specific OS vendor, which aims to remedy and deliver fixes for all critical issues by the next business day where possible, minimizing business impact.

In addition, tools like OpenSCAP automate regulatory compliance and security configuration remediation across business systems and within containers, scanning images and checking and remediating them against vulnerability and configuration security baselines.

It is more private.

Microsoft’s Windows 10 does not look convincing on privacy and has already received enormous criticism for how it collects data, requiring users to switch off numerous telemetry modules in its privacy settings.

On the other hand, Linux distributions collect little or no data. In addition, you don’t need additional tools to protect your privacy.

It has better community support.

Linux forums provide good solutions because the community helps solve your problems. Just post a query in one of the many Linux-related forums on the web and expect plenty of replies with detailed solutions. There is often no need to hire an expert and spend a fortune.


Even though Linux’s advantages have been discussed above, you still need a deeper understanding of both Linux and Windows systems.

Weigh the pros and cons of each, and balance them from different perspectives.

Also, you can work across platforms with Windows and Linux, to better experience each of their capabilities.

By Tuan Nguyen

[Image: React logo]

Technology review – React 16.9

React 16.9 landed on August 8, 2019, bringing numerous bug fixes as well as some new features, including <React.Profiler> and the testing utility act().

Tl; dr;

React 16.9 does NOT contain any breaking changes, so we can upgrade from 16.8 safely. It contains a programmatic profiler so developers can measure the performance of specific components as well as the whole application. It also introduces a new-ish testing function, act(), which helps simulate how React works in a real browser.


Unsafe lifecycles

Major deprecation warnings are introduced in this update. Firstly, the deprecated lifecycle methods are now renamed with an UNSAFE_ prefix:

class MyComponent extends React.Component {
  // previously componentWillMount()
  UNSAFE_componentWillMount() { ... }
  // previously componentWillReceiveProps()
  UNSAFE_componentWillReceiveProps() { ... }
  // previously componentWillUpdate()
  UNSAFE_componentWillUpdate() { ... }
}

The old lifecycle methods will be removed in future updates. In 16.9, code containing them only triggers warnings.

To help with migration, Facebook also provides a tool called react-codemod to help rename these methods across your codebase. More details are in the official announcement from the React team.

javascript: URLs

Another deprecation concerns the usage of javascript: URLs.

It has always been possible to copy and paste the following code into the URL bar and see your current platform. The snippet can be changed to do all sorts of things, including exploiting security vulnerabilities.

javascript: alert(`Hello champion. Your browser is ${window.navigator.platform}`);

In React 16.9, the usage of this syntax throws a warning, and the React team plans to make it throw an error in a future release. More details here.

“Factory” components

Previously, we could write a factory function that returns an object with a render method, creating dynamic components on demand. This pattern dates from before Babel made compiling React classes commonplace.

const factoryComponent = (type) => {
  return {
    render() {
      return <div>{type}</div>;
    },
  };
};
Now, with such usage, React throws a warning notifying developers to avoid this pattern. The code can be altered to properly return a function component or a component class instead.

// factory for function components
const factoryComponent = (type) => {
  return () => <div>{type}</div>;
};

// factory for component classes
const factoryComponent = (type) => {
  return class extends React.Component {
    render() {
      return <div>{type}</div>;
    }
  };
};

New features

Async act()

Finally, we can use act() with asynchronous functions. Prior to 16.9, it accepted only synchronous functions.

const asyncFunc = async () => { ... };

// before
it('should do something', (done) => {
  act(() => {
    asyncFunc()
      .then(() => done())
      .catch((err) => done(err));
  });
});

// after
it('should do something', async () => {
  await act(() => asyncFunc());
});


<React.Profiler>

Basically, <React.Profiler> is a wrapper component that allows a callback function to be executed every time the underlying components render.

class App extends React.PureComponent {
  constructor(props) {
    super(props);
    this.state = {
      counter: 1,
    };
    setInterval(() => this.setState({ counter: this.state.counter + 1 }), 1000);
  }

  render() {
    const onRender = (...args) => console.log(args);
    return (
      <React.Profiler id="application" onRender={onRender}>
        <div className="App">{this.state.counter}</div>
      </React.Profiler>
    );
  }
}

We then see the measurements in our console.
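
The onRender callback receives a set of timing arguments. Here is a minimal sketch; the argument order follows React’s Profiler API, while the collection logic is purely illustrative:

```javascript
// Illustrative onRender callback for <React.Profiler>.
// Argument order follows React's Profiler API; the aggregation is made up.
const commits = [];

function onRender(id, phase, actualDuration, baseDuration, startTime, commitTime) {
  commits.push({ id, phase, actualDuration });
  console.log(`${id} [${phase}] took ${actualDuration}ms`);
}

// Simulating two commits the way React would invoke the callback:
onRender('application', 'mount', 3.2, 3.2, 10, 14);
onRender('application', 'update', 0.8, 3.2, 1010, 1011);
```

Collecting the commits in an array like this lets you average or inspect render times later instead of just logging them.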

For more information on Profiler, the React team provides in-depth documentation here.


Every React version brings us joy to test and tinker with the new features. React 16.9 is no different. I had fun working with the #bleedingedge. Hopefully I can integrate some of these new features into my work soon.

By Tuan Nguyen

[Image: Apache vs Nginx]

Technology review – Apache vs Nginx

The two most popular open source web servers worldwide, Apache and Nginx are responsible for serving over 50% of traffic on the internet.

Even though the two are broadly similar in how they handle requests, analyze them, and send back the corresponding documents to be viewed in a visitor’s browser, they should not be regarded as entirely interchangeable.

Each excels in its own way at managing diverse workloads and working with other software to provide a complete web stack. It is also important to understand the situations in which you might need a thorough evaluation when choosing your web server.

Let us dig deep into the differences between Apache and Nginx and how each server stacks up in several areas.

Tl; dr;

Apache and Nginx sit close together at the top of the web server popularity chart. Both are used by large Fortune 500 companies around the globe and together serve more than half of the websites currently on the Internet.

Most often, Nginx is compared to Apache due to its similar open-source philosophy.

In general, their main difference lies in their design structure.

Apache uses a process-driven approach, creating a new thread for each request, while Nginx uses an event-driven architecture, handling multiple requests within one thread.
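
This contrast can be sketched in JavaScript (a deliberately simplified, single-threaded model, not either server's actual implementation):

```javascript
// Simplified model of event-driven serving: one loop multiplexes every
// queued request on a single thread, instead of spawning a worker per request.
// Not real Apache or Nginx code.
function eventDrivenServe(queue, handle) {
  const responses = [];
  while (queue.length > 0) {
    const request = queue.shift(); // take the next ready request
    responses.push(handle(request)); // handle it without blocking the others
  }
  return responses;
}

const handled = eventDrivenServe(
  ['/index.html', '/style.css', '/app.js'],
  (path) => `200 OK ${path}`
);
console.log(handled);
```

In a real event-driven server the handler never blocks on I/O; it registers a callback and returns to the loop, which is what lets one thread service thousands of connections.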


[Image: Apache logo]

Logo: Apache Software Foundation // Composition: ZDNet

Created by Robert McCool in 1995, the Apache HTTP Server has been the most popular server on the internet since 1996, reaching the 100-million-website milestone in 2009. It powers up to 52% of all websites worldwide.

This web server is commonly picked by administrators for its flexibility, power, and widespread support. It has an extensive dynamically loadable module system and can process a large number of interpreted languages without handing requests off to separate software.

Due to its popularity, Apache benefits from great documentation and integrated support from other software projects. It is commonly used in the LAMP (Linux, Apache, MySQL, PHP) web stacks found on shared hosting servers and has a rich set of features that can be enabled by installing one of the 60 official modules.


[Image: Nginx logo]

Source: NGINX

An answer to the so-called C10K challenge ("How do you design a web server which can handle ten thousand concurrent connections?"), Nginx (pronounced "EN-jin-EKS", or "Engine X") was developed starting in 2002 by Russian developer Igor Sysoev and publicly released in 2004.

As Owen Garrett, Nginx’ project manager said, “Nginx was written specifically to address the performance limitations of Apache web servers.”

This means that, unlike earlier servers such as Apache, Nginx uses an asynchronous, event-driven architecture; it is typically deployed in the LEMP stack (Linux, Nginx, MySQL, PHP).

In March 2019, Nginx, Inc. was acquired by F5 Networks for $670 million; at that point, as TechCrunch reported, the Nginx server was powering "375 million websites with some 1,500 paying customers".

Battle Ground: Apache vs. Nginx

Growing popularity. More than 65% of websites were based on Apache up until 2012, and it is widely regarded as among the first software that pioneered the growth of the World Wide Web (WWW); much of its popularity stems from this historical legacy.

But that dramatically changed.

The popularity gap between Apache and Nginx is closing fast. According to one survey, as of January 14, 2019, Apache (44.4%) was just slightly ahead of Nginx (40.9%) in terms of websites using their servers.

[Image: server popularity ranking]

Speed. A good web server should run at great speed and respond with ease to connections and traffic from anywhere.

In terms of server speed, a documented experiment compared two popular travel websites, one running on Apache and the other on Nginx.

Measured via an online tool called Bitcatcha, the comparison covered multiple server locations and was judged against Google’s benchmark of 200 ms.

The results were striking: the Nginx-based site was rated "exceptionally quick," whereas the Apache-based site was rated "above average and could be improved."

Security level. In terms of security, both projects take security seriously, guarding against threats such as DDoS attacks, malware, and phishing. The two release security reports and advisories periodically.

Concurrency features. Another test was conducted, again comparing the same Nginx-based and Apache-based websites through stress testing.

For 25 virtual users, the Nginx site recorded 200 requests per second, 2.5 times more than Apache’s 80 requests per second.

Therefore, for a dedicated high-traffic website, Nginx is definitely a good choice.

Versatility. A good web server should be flexible enough to allow customizations. Apache does this well via .htaccess files, which allow the decentralization of administrator duties; Nginx does not support them. That is why Apache is far more popular with shared hosting providers.


Overall, Nginx narrowly wins the match 2-1, but comparisons on technical parameters alone cannot convey the full picture. Both web servers are useful in their own ways.

Apache has the more convenient .htaccess file and lots of dynamic modules. Nginx is popular for VPS hosting, dedicated hosting, and clustered containers.

By Tuan Nguyen

[Image: XML heart tag]

Discussion – Why JavaScript is loved by developers?

A flexible and powerful programming language, JavaScript powers countless mobile sites, games, and web applications. It has become a core component of web technology along with HTML and CSS.

According to numerous sources, including Stack Overflow’s Annual Survey of 2018, JavaScript is the most commonly used programming language.

As stated in that same 2018 developer survey, "For the sixth year in a row, JavaScript is the most commonly used programming language." Among the most commonly used libraries, frameworks, and tools, JavaScript-based technologies like Node.js, AngularJS, and React top the list.

[Image: popular technology list]

Source: Stack Overflow

Another survey revealed that JavaScript is used by 88% of all websites.

For all its success, it was never meant to become the cornerstone of modern web development. In fact, in the mid-90s the language was created in less than two weeks, with a very different purpose in mind: running on both client and server.

Back then, it wasn’t a complete success though.

It took years to be taken seriously as a backend language, but it had rapidly thrived on the frontend, eventually becoming the standard programming language of the web.

Tl; dr;

While HTML is for structure and CSS is for style, JavaScript provides interactivity to web pages in the browser.

The name JavaScript was chosen in an attempt to ride the wave of Java’s popularity and speed up adoption.

You will not find many similarities between the two languages today.

With the creation of JavaScript, it became possible to serve two different audiences: component writers and enterprise-level professionals using Java, and scripters and designers using JavaScript.

This second group is what, in modern web development terms, we would call frontend developers.

Why do developers love JavaScript?

[Image: JavaScript features]

Source: TutorialAndExample

A scripting language inserted directly into the HTML of a page, JavaScript is the only programming language of its kind that web browsers understand natively. We will have to wait for WebAssembly to mature before anything is on par with JavaScript.

With JavaScript, browsers can read, interpret, and then run the program, creating powerful client-side experiences.

Here are a few JavaScript-powered things you see every time you spend even two minutes in a web browser:

  • Autocomplete;
  • Loading new content or data onto the page without reloading the page;
  • Rollover effects and dropdown menus;
  • Animating page elements (i.e. fading, resizing or relocating);
  • Playing audio and video; and
  • Validating input from forms.
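
For instance, the form-validation item above can be a few lines of JavaScript (a hypothetical validator; the field names and rules are made up for illustration):

```javascript
// Hypothetical client-side form validation: gives the visitor immediate
// feedback without a round trip to the server.
function validateEmail(value) {
  // A deliberately simple pattern for illustration, not a full RFC check.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

function validateForm(fields) {
  const errors = [];
  if (!fields.name || fields.name.trim() === '') {
    errors.push('Name is required');
  }
  if (!validateEmail(fields.email || '')) {
    errors.push('Email address is invalid');
  }
  return errors; // an empty array means the form can be submitted
}

console.log(validateForm({ name: 'Ada', email: 'ada@example.com' })); // []
console.log(validateForm({ name: '', email: 'not-an-email' }));       // two errors
```

In a real page this would run on each input or submit event, showing the messages next to the offending fields.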

Web servers run on many different languages, such as Python, PHP, Ruby, Java, or .NET, and JavaScript on the client works alongside all of them.

The importance of JavaScript

JavaScript has wide-ranging importance, that’s why it is popular among web developers:

  • JavaScript is supported by all major web browsers, including Google Chrome, Internet Explorer, Firefox, Safari, and Opera.
  • It has lots of frameworks and libraries, making it easy for web developers to build large JavaScript-based web applications.
  • You can write JavaScript in any plain text editor; it is easy to write without any specific tooling.
  • With Google implementing many optimization techniques to increase the loading speed of mobile web pages, developers use JavaScript to optimize websites for Accelerated Mobile Pages (AMP).
  • Although it is an interpreted programming language, JavaScript simplifies the development of complex web applications by letting developers simplify the application’s composition.
  • Since most developers use responsive web design to make a website accessible and working well across multiple browsers and devices such as smartphones and desktops, JavaScript helps optimize web pages for mobile devices.
  • It is fast for the end user: visitors don’t need to fill out an entire form and submit it just to be told they made a typo in the first field, then fill in the whole form again. JavaScript provides immediate feedback when they make a mistake.
  • JavaScript is simple for developers to learn, with a syntax close to English.
  • It has third-party add-ons that help developers write snippets that can be reused on the necessary web pages.
  • With JavaScript, code executes on the user’s processor instead of the web server, saving bandwidth and reducing load on the server.


JavaScript is a mainstream programming language, much loved by many types of developers, helping them build large-scale web applications easily and quickly.

It also enhances speed, performance, functionality, usability, and features of the application without any hassle, delivering optimal user experience across various devices, browsers, and operating systems.

Additionally, JavaScript has libraries, frameworks, and tools to suit the requirements of most projects.

By Tuan Nguyen

[Image: illustration of binary code]

Discussion – Should we still use low level programming languages?

Abbreviated as LLL, a low-level language is a type of programming language that contains basic instructions recognized by a computer.

It is often cryptic and not human-readable, compared to the high-level languages used by software developers.

The word “low” refers to the small or nonexistent amount of abstraction between the language and machine language, which is why low-level languages are sometimes described as being “close to the hardware.”

Since instructions written in low level languages are machine dependent, a low-level programming language interacts directly with the registers and memory.

The programs developed using low level languages are machine dependent and are not portable.

Tl; dr;

Used to write programs tied to the specific architecture and hardware of a particular type of computer, low-level languages (LLL) are closer to the native language of a computer (binary), making them harder for programmers to understand.

They are designed to operate and handle the entire hardware and instruction set architecture of a computer, and are most appropriate for developing operating systems and device drivers, or for writing firmware for micro-controllers.

What are low level programming languages?

[Image: levels of programming languages]


The prime function of low-level programming languages is to operate, manage, and manipulate computing hardware and components. They are much less human-readable, yet easier and faster for computers to understand.

Programs and applications written in low-level languages can be executed directly on the computing hardware with little or no interpretation or translation.

Though it is harder for programmers to write, debug, and maintain low-level programs, they are much more efficient, running faster because they are close to machine language and leaving a smaller memory footprint on your computer.

It has two (2) common types:

Machine language.

The closest language to the hardware, machine language consists of instructions directly executed by the computer.

These instructions are sequences of binary bits, each performing a very specific and small task.

For example:

SUB AX, BX   ;00001011 00000001 00100010

is an instruction set to subtract values of two registers AX and BX.

Generally, instructions written in machine language are machine dependent and vary from computer to computer.

In the early days of programming, developing programs in this language was a tedious job, requiring programmers to remember sequences of binary for different computer architectures. Today, it is not much used in practice.

Assembly language.

An improvement over machine language, assembly language is similar to machine language in that it also interacts directly with the hardware. The only difference is that instead of using a raw binary sequence to represent an instruction, this language uses mnemonics.

Mnemonics are short abbreviated English words specifying a computer instruction. Each instruction in binary has a specific mnemonic. They are architecture-dependent and there is a list of separate mnemonics for various computer architectures.

Examples of mnemonics include ADD, MOV, and SUB; they are easier to remember than binary sequences.

The language uses a special program called assembler. An assembler translates mnemonics to a specific machine code.

Its advantages are the following:

  • Making use of special hardware or special machine-dependent instructions (e.g. on a specific chip);
  • Producing translated programs that require less memory;
  • Writing code that can be executed faster;
  • Having full control over the code; and
  • Working directly on memory locations.
Machine Language: 1010001100011001
Assembly Language: ADD R3, R1, R9

Source: Abstraction Level Taxonomy of Programming Language Frameworks by Dr. Brijender Kahanwal
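
A toy assembler in JavaScript makes the translation concrete. Only the ADD R3, R1, R9 encoding mirrors the example above; the other opcodes and the error handling are invented for illustration and match no real architecture:

```javascript
// Toy assembler: translates a mnemonic plus register operands into a binary
// string. Only the ADD R3, R1, R9 row mirrors the example above; SUB and MOV
// encodings are made up for illustration.
const OPCODES = { ADD: '1010', SUB: '0010', MOV: '0011' };
const REGISTERS = { R1: '0001', R3: '0011', R9: '1001' };

function assemble(line) {
  // "ADD R3, R1, R9" -> ["ADD", "R3", "R1", "R9"]
  const [mnemonic, ...operands] = line.replace(/,/g, ' ').split(/\s+/).filter(Boolean);
  const opcode = OPCODES[mnemonic];
  if (!opcode) throw new Error(`Unknown mnemonic: ${mnemonic}`);
  return opcode + operands.map((reg) => REGISTERS[reg]).join('');
}

console.log(assemble('ADD R3, R1, R9')); // "1010001100011001"
```

A real assembler does the same mapping, plus operand encoding rules, labels, and addressing modes, per the target architecture's manual.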

Advantages of Low Level Programming Languages

  • Programs are fast and memory efficient when developed using low level languages.
  • Programmers can better utilize processor and memory.
  • There is no need for a compiler or interpreter to translate the source into machine code, cutting out compilation and interpretation time.
  • Provides direct manipulation of computer registers and storage.
  • Can directly communicate with hardware devices.

Disadvantages of Low Level Programming Languages

  • Programs are machine dependent and are not portable when developed using low level languages.
  • Difficult to develop, debug and maintain.
  • Programs developed in low level languages are more prone to error.
  • Low level programming usually results in poor programming productivity.
  • Programmers must have additional knowledge of the computer architecture of a particular machine for programming in low level language.

Should we still use this type of programming language?

[Image: low level language vs high level language]

Source: Medium

Summing up, it depends on what you want to program.

Low-level languages, being closer to the native language of a computer (binary), have important benefits: they require little interpretation by the computer, so they generally run very fast and give programmers total control over data storage, memory, and retrieval.

Their counterparts, high-level languages, are written in a form close to human language. They are intuitively easier to grasp, allowing programmers to write code much more efficiently. They are also considered safer, with more safeguards in place keeping coders from issuing poorly written commands that could cause damage. The downside is that they don’t give much control over low-level processes.

If you’re planning to write operating systems, kernels, or anything that needs to run at the absolute highest speed possible, a low level language might be the best choice.

But, numerous modern apps are written in higher-level or even domain-specific languages.

So you might consider learning both languages instead.

By Tuan Nguyen

tug of war

Discussion – GitHub vs Bitbucket

If you want a large development team to collaborate and work on a certain project, you need to choose the right source platform to upload your code.

You can pick any repository hosting platform, but unfortunately not every repository host will make your developers more productive in creating the products you’ve planned.

GitHub and Bitbucket are the two most popular repository hosts, and both provide their customers with public and private repositories.

These two have grown strong communities and user bases over the years.

Tl; dr;

GitHub and Bitbucket are the two best-known version control repository hosting services on the DevOps market.

Both offer features appealing to everyone from individual developers to small teams, right through to enterprise customers.

Their most basic and fundamental difference is that GitHub is focused on public code and has a huge open-source community, while Bitbucket focuses on private repositories and serves mostly enterprise and business users.
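As a toy illustration of how the two hosts show up in everyday tooling, here is a small sketch (the helper name and example URLs are made up for illustration, not part of either product) that classifies a Git remote URL by host:

```python
from urllib.parse import urlparse

def classify_remote(url: str) -> str:
    """Return 'github', 'bitbucket', or 'other' for a Git remote URL."""
    # Normalize SSH-style remotes like git@github.com:user/repo.git
    if url.startswith("git@"):
        host = url.split("@", 1)[1].split(":", 1)[0]
    else:
        host = urlparse(url).netloc.split("@")[-1]
    if host.endswith("github.com"):
        return "github"
    if host.endswith("bitbucket.org"):
        return "bitbucket"
    return "other"

print(classify_remote("git@github.com:octocat/hello.git"))     # github
print(classify_remote("https://bitbucket.org/team/repo.git"))  # bitbucket
```

Both hosts speak the same Git protocol, which is why a single repository can be pushed to either with nothing more than a different remote URL.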

What is GitHub?

github logo

Source: thenextweb

GitHub is a for-profit company that offers a cloud-based Git repository hosting service, helping developers store and manage their code, as well as track and control changes to their code.

It is the most popular repository hosting service, with some 57 million code repositories on file. It is also loved by the open source community, as public repositories are free.

It only supports Git (not Mercurial or SVN), is written in Ruby and Erlang, and is available for Windows, Mac, and Android.

Its key features include:

  • Social coding;
  • Collaboration;
  • Integrated issue and bug tracking;
  • Graphical representation of branches;
  • Code review;
  • Code hosting;
  • Team management;
  • Project management;
  • Propose changes;
  • Protect branches;
  • Tracking and assigning tasks;
  • @mentions;
  • Conversations;
  • Milestones;
  • Assignees;
  • Integrations;
  • Documentation;
  • Set community guidelines;
  • Built-in review tools; and
  • Team and user permissions.

Advantages of using GitHub

  • An integrated issue tracker, helping teams to track issues within their project in real-time;
  • Contains milestones and labeling features to assist users in tracking information changes;
  • Offers branch comparison views for reviewing the state of the user’s repository across commits, tags, and timescales;
  • Supports more than 200 different programming languages;
  • Allows users to host and publish their own code on the GitHub platform over SSL, SSH, or HTTPS; and
  • Has the ability to highlight syntax (an edge over other platforms’ offerings).

What is Bitbucket?

bitbucket logo

Source: stackshare

An Atlassian product (the makers of Trello and other apps), Bitbucket is very well-known for its full integration with other products in the Atlassian family such as Jira, Fisheye and Bamboo. It presents a slick and clean interface the moment you log in.

The tool is primarily tailored to helping enterprise developers. It is written in Python, uses the Django web framework, supports Git and Mercurial VCS (but not SVN), and comes with SOC 2 Type II security compliance.

Bitbucket is available for Mac and Windows, and for Android via an app.

Its key features include:

  • Workflow control
  • Access control to restrict access to your source code
  • JIRA software integration
  • Snippets
  • Collaborative projects
  • Smart mirroring (beneficial for enterprise level)
  • Code clustering
  • Mercurial repository hosting
  • Latest and Updated APIs
  • Full support for large files
  • Supports external authentication with GitHub, Facebook, Google and Twitter.

Advantages of using Bitbucket

  • Allows users to construct an internal issue tracker within their repository so that they can track down bugs in real-time;
  • Gives its users access control features, regulating access permissions for different people involved in the project;
  • Incorporates two deployment models, allowing users either to place the code in a cloud environment or to launch a separate in-house server;
  • Offers both Git VCS and Mercurial;
  • Easy to navigate and search;
  • Helps users migrate their code from old repositories to new Bitbucket repositories; and
  • Though it doesn’t offer syntax highlighting, Bitbucket still lets users add comments and launch threaded discussions.

GitHub vs Bitbucket: Which one is better?

Summing up, Bitbucket is easy to use and a bit forgiving if you are new to Git, as you learn the workflow.

If you are interested in open-source development, GitHub is the major platform for open-source and public code, while Bitbucket specializes in business clients.

For teams with only a few people, Bitbucket can be your friend.

In terms of pricing, GitHub falls a bit on the pricier end of the spectrum in comparison to Bitbucket, but it also promises a bigger community.

Weighing the other attributes and features, GitHub takes the top spot and is arguably the best version control repository on the market.

Whichever development tool you use, be it Bitbucket or GitHub, you can count on great service from either product.


GitHub and Bitbucket are development tools that cater to different services and demographics. GitHub has a huge open-source community, while Bitbucket mostly serves enterprise and business users.

It is not that you can’t have a private repository on GitHub (you can) or that you can’t post code publicly on Bitbucket (again, you can). It is simply that the majority of users aren’t doing so.

Outside their differences, the two platforms function very similarly.

By Tuan Nguyen

software development methodology

Software Development Methodology – Crystal

Crystal, also known as the Crystal Methods, is a family of methodologies (the Crystal family) developed by Alistair Cockburn from his study of and interviews with teams in 1998.

The name Crystal comes from the gemstone: in software terms, the faces (representing the techniques, tools, standards and roles) are different views onto the “underlying core” of principles and values.

It primarily focuses on people and their interactions when they work on a certain software development project rather than on processes and tools.

It also centers on the business-criticality and business-priority of the system under development.

Crystal methods are said to be “lightweight methodologies”, promoting early, frequent delivery of working software, adaptability, high user involvement, and the removal of bureaucracy or distractions.

In Cockburn’s own words, “Crystal is a family of human-powered, adaptive, ultra light, ‘stretch-to-fit’ software development methodologies.”


Source: ActiveCollab

Tl; dr;

Crystal is a collection of Agile software development approaches introduced by Alistair Cockburn in 1998. The concept focuses on communication, with special emphasis on interaction, community, people and skills.

Cockburn believes that people’s skills and talents, as well as the way they communicate, have the biggest impact on the project outcome.

He adds that we should view product development as a game which should stimulate everyone to interact, become creative and produce brilliant ideas. Instead of focusing on questions like “Is our model accurate?”, we should be looking for answers to questions like “Is our product meeting the customer’s needs?” or “Do we have our goals aligned as a team?”

The Crystal Methods

crystal lifecycle

Source: A Practical Guide to Seven Agile Methodologies Part 2

One of the most lightweight, adaptable approaches to software development, Crystal methodology is based on two (2) fundamental assumptions:

  • Teams can streamline their processes as they work and become more optimized.
  • Projects are unique and dynamic and require specific methods.


Crystal method is actually comprised of a family of agile methodologies in different variants (colors), each of which has unique characteristics driven by several factors such as team size, system criticality, and project priorities. The following are just some of the practically used crystal methodologies in real projects:

  • Crystal Clear – for teams of up to 6 people.
  • Crystal Yellow – for teams consisting of 10-20 people.
  • Crystal Orange – for teams consisting of 20-40 people.
  • Crystal Red – for teams of up to 80 people.
  • Crystal Maroon – for teams of up to 200 people.
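The size-driven selection above can be sketched as a simple lookup (a toy illustration of the principle, not an official Crystal tool; real variant selection also weighs system criticality and project priorities):

```python
def crystal_variant(team_size: int) -> str:
    """Pick a Crystal family variant from team size alone,
    using the size bands listed above."""
    if team_size <= 6:
        return "Crystal Clear"
    if team_size <= 20:
        return "Crystal Yellow"
    if team_size <= 40:
        return "Crystal Orange"
    if team_size <= 80:
        return "Crystal Red"
    if team_size <= 200:
        return "Crystal Maroon"
    return "beyond the commonly documented variants"

print(crystal_variant(5))   # Crystal Clear
print(crystal_variant(35))  # Crystal Orange
```

The point of the colors is exactly this: the methodology “stretches to fit” the team rather than forcing every team into one process.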

Defining the Crystal Method Characteristics

  • Human-powered. This simply emphasizes that people are the most vital element of Crystal, and all the processes and tools are relative to them. It also means that people are capable of organizing themselves. As the processes develop, they can become more organized and competent.
  • Adaptive. An approach rather than a set of prescribed tools and techniques for software development, Crystal is a stretch-to-fit methodology. This means that processes and tools are not fixed, but are adjusted to meet the requirements of the team and the project at hand.
  • Ultra-light. Not advocating too much documentation, management, or reporting, Crystal is known as a “lightweight methodology.” It focuses on open communication between team members and a transparent workflow between the team and the client.
  • The Crystal Method Practices: Certain practices are crucial for successful implementation on any project. Crystal’s key practices include:
  • An iterative and incremental development approach. For the overall refinement and completion of the software, the project is developed in iterations that are generally time-boxed. User feedback taken at the end of an iteration is used to plan the next one, and new features are added in every subsequent iteration.
  • Active user involvement. With Crystal’s people-centered and transparent nature, users are not only actively involved but also regularly informed regarding the project’s progress.
  • Delivering on commitments. The team strives to secure frequent delivery of client-valued, potentially-shippable functionalities.

7 Properties of Crystal Method

Across all methods in the Crystal family, the following are the seven (7) prevailing common properties:

  • Frequent Delivery. Allows teams to frequently deliver working, tested code to real users. This way, they avoid investing their energy and time into a product that nobody wants to buy.
  • Reflective improvement. There are always techniques and methods by which your team can improve a product, no matter how good or bad it has become.
  • Close or osmotic communication. Enables team members to pick up useful information without being directly involved in the discussion of a particular matter.
  • Personal safety. To build a healthy working atmosphere and genuine team culture, team members should practice open and honest communication, whether presenting a new idea or a possible problem, without fear.
  • Focus. Each team member knows exactly what to work on, allowing them to focus their attention and avoid switching from one task to another.
  • Easy access to expert users. Allows teams to maintain communication and get regular feedback from real users.
  • Technical environment with automated tests, configuration management, and frequent integration. Very specific tools for software teams, emphasizing continuous integration so that errors can be caught within minutes.


The Crystal approach considers people the most important element, so processes should be modeled to meet the requirements of the team. It takes an iterative and incremental development approach, involves users actively, and delivers on commitments. It is also adaptive, without a set of prescribed tools and techniques, and doesn’t require much documentation, management, or reporting.


If you enjoy reading about software development methodologies, take a look at our other blog posts.

Software Development Methodology – Extreme Programming

Software Development Methodology – Scrum

Software Development Methodology – Kanban

Software Development Methodology – Lean

Software Development Methodology – Dynamic Systems Development Method (DSDM)

Software Development Methodology – Feature Driven Development (FDD)

By Tuan Nguyen

software development methodology

Software Development Methodology – Feature Driven Development (FDD)

Client-centric, architecture-centric, and pragmatic, Feature Driven Development (FDD) is an agile framework that primarily focuses on the feature set that the client values, and is known for short iterations and frequent releases.

FDD was first formally introduced in 1999 in the book Java Modeling in Color with UML, while its first real-world application was a 15-month, 50-person project for a large Singapore bank in 1997, immediately followed by a second, 18-month, 250-person project.

The term “feature,” as used in the FDD context, doesn’t necessarily mean product features; rather, features are more akin to user stories in Scrum. Thus, “completing the login process” might be considered a feature in the FDD methodology.

Not many talk about FDD, though it is often mentioned in passing in agile software development books and forums.

people plan a project

Source: DevPro Journal

Tl; dr;

Feature Driven Development is a customer-centric software development methodology built largely around discrete “feature” projects.

With this concept, developers can plan and manage each stage of project development, prioritizing client requests, responding to them in time, and keeping clients satisfied.

This is done through mapping out what features developers are capable of creating, breaking complex requests into a series of smaller feature sets and then creating a plan for how to complete each goal over time.

The Feature Driven Development Methodology

feature driven development lifecycle

Source: New Line Technologies

Suitable for projects with large development teams that follow pre-defined standards and require quick releases, FDD focuses on short iterations, each of which serves to work out a certain part of the system’s functionality.

The development of the FDD concept is credited to Jeff De Luca and Peter Coad, who were working on a banking project in Singapore back in 1997.

Under FDD, the project is divided into “features”: small pieces of a complete project.

Feature Driven Development Roles

FDD has six (6) key roles:

  • Project Manager (PM). The administrative head of the project, who reports progress, handles budgets, and manages equipment, space, resources, etc.
  • Chief Architect (CA). The one responsible for the overall design of the system.
  • Development Manager (DM). Leads day-to-day development activities.
  • Chief Programmers. Are experienced programmers who’ve been through the entire software development lifecycle a few times before.
  • Class Owners. Are developers working as members of small development teams under the guidance of the Chief Programmer.
  • Domain Experts. They are users, sponsors, business analysts, or a combination of these, known to be the knowledge base that the developers rely on, enabling them to deliver the correct system.

Other supporting roles, include:

  • Release Manager. Ensures that Chief Programmers report progress weekly, and reports directly to the Project Manager.
  • Language Guru. Responsible for knowing a programming language or a specific technology inside out.
  • Build Engineer. Sets up, maintains, and runs the regular build process.
  • Toolsmith. Creates small development tools for the development team, test team, and data conversion team.
  • System Administrator. Responsible for configuring, managing, and troubleshooting any servers and networks of workstations specific to the project team.

Additional roles are as follows:

  • Tester. Independently verifies that the system’s functions meet the users’ requirements and that the system performs those functions correctly.
  • Deployer. Converts existing data to the new formats required by the new system and works on the physical deployment of new releases of the system.
  • Technical Writer. Writes and prepares online and printed user documentation.

Feature Driven Development Processes

FDD has five (5) basic process steps:

  • Developing an overall model. Cross-functional, iterative, and highly collaborative, FDD pushes team members to work together to build an object model of the domain area, guided by the Chief Architect. As detailed domain models are created, they are progressively merged into an overall model.
  • Building the list of features. Using the model from the previous step, the team or the chief programmer builds a list of features that would be useful to users and could be completed within a set timeline for release.
  • Planning by feature. It’s all about organizing. Here, plans are laid out for the order in which the features will be implemented. Teams are then selected and assigned feature sets.
  • Designing by feature. On this stage, the chief programmer chooses the features for development and assigns them to feature teams consisting of the project manager; the chief architect; the development manager; the domain expert; the class owner; and the chief programmer.
  • Building by feature. Feature teams complete the coding, testing, and documentation of each feature, then promote the feature to the main build.
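The steps above can be sketched as a toy data model (all names are illustrative, not part of any FDD tool):

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    # FDD features are named in client-valued terms, e.g.
    # "Complete the login process" rather than a technical task.
    name: str
    owner: str = ""     # class owner responsible for the code
    done: bool = False

@dataclass
class FeatureSet:
    name: str
    features: list = field(default_factory=list)

# Step 2: build the feature list from the overall model.
login = FeatureSet("User access", [Feature("Complete the login process"),
                                   Feature("Reset a forgotten password")])

# Step 3: plan by feature -- assign each feature an owner.
for f in login.features:
    f.owner = "chief-programmer-team-A"

# Steps 4-5: design/build by feature, then promote to the main build.
login.features[0].done = True
remaining = [f.name for f in login.features if not f.done]
print(remaining)  # ['Reset a forgotten password']
```

The value of the breakdown is that progress is tracked feature by feature, so “done” always refers to something a client can recognize.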

Best practices

Feature Driven Development is built around overall software engineering best practices:

  • Identify the domain object model, or the scope of the problem that needs to be solved.
  • Break down complex features into smaller functions and subsets.
  • Assign features to a single owner in order to ensure consistency and code integrity.
  • Build dynamic and diverse feature teams to collect multiple design options.
  • Perform routine code inspections of each feature before implementing into the main build.
  • Enforce project visibility through frequent, accurate progress reports during all steps.


An iterative and incremental software development methodology, Feature Driven Development (FDD) aims to develop high-level features, scope, and a domain object model, and then use these to plan, design, develop, and test the specific requirements and tasks under the overarching feature they belong to.

This concept is ideal for projects that have large development teams, follow pre-defined standards and require quick releases.


If you enjoy reading about software development methodologies, take a look at our other blog posts.

Software Development Methodology – Extreme Programming

Software Development Methodology – Scrum

Software Development Methodology – Kanban

Software Development Methodology – Lean

Software Development Methodology – Dynamic Systems Development Method (DSDM)

Software Development Methodology – Crystal

By Tuan Nguyen

software development methodology

Software Development Methodology – Dynamic Systems Development Method (DSDM)

DSDM, also known as the Dynamic Systems Development Method, is an agile project delivery framework addressing the full project lifecycle and its impact on the business, including the guidance needed to bring a product through the entire project and its releases.

The method has a four-phase framework, namely:

  • Feasibility and business study;
  • Functional model / prototype iteration;
  • Design and build iteration; and
  • Implementation.

It is an iterative, incremental approach first conceived in 1994, when project managers using another agile framework, the Rapid Application Development (RAD) methodology, determined that the new iterative approach to software development needed more governance and stricter guidelines.

team meeting

Source: Emotive Brand

Tl; dr;

Dynamic Systems Development Method (DSDM) is a framework largely based around Rapid Application Development (RAD). It focuses on Information Systems projects that are characterized by tight schedules and budgets.

The method’s primary aim is to deliver business needs and real benefits. DSDM also makes sure that the benefits are clear and the solution feasible, with solid foundations in place before a project is started.

DSDM Project Delivery Method

dsdm lifecycle

Source: Methods & Tools

DSDM is an agile development type that prioritizes schedule and quality over functionality.

It uses the MoSCoW method of prioritization, breaking a project into four (4) different types of requirements:

  • Must have (M) – requirements critical to the project’s success.
  • Should have (S) – possible requirements that represent a high-priority item that should be included in the solution.
  • Could have (C) – less critical requirements and often seen as nice to have items.
  • Won’t have (W) – the least-critical requirements which, as the name suggests, will not be covered in the project timeframe.
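A minimal sketch of MoSCoW bucketing (the requirement names are made up for illustration):

```python
from collections import defaultdict

# Example requirements tagged with their MoSCoW category.
requirements = [
    ("User can log in", "Must have"),
    ("Password reset email", "Should have"),
    ("Dark mode", "Could have"),
    ("Offline mode", "Won't have"),
]

# Group the requirements by category.
buckets = defaultdict(list)
for name, category in requirements:
    buckets[category].append(name)

# Everything except "Won't have" is in scope for the timeframe.
in_scope = [name for name, cat in requirements if cat != "Won't have"]
print(in_scope)  # ['User can log in', 'Password reset email', 'Dark mode']
```

When schedule pressure hits, the team drops from the bottom of this ordering upward, which is how DSDM protects the deadline without compromising the must-haves.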

DSDM Principles

This method is most frequently applied for software development projects, with suitability in any industry and any project (big or small).

It has eight (8) principles:

  • Focus on the business need. The team should understand business priorities and commit to delivering at least the Minimum Usable Subset.
  • Deliver on time. The team splits the work into increments, prioritizing project requirements and protecting deadlines to ensure the project is delivered on time. Long-term projects are delivered on time through the on-time delivery of each increment, or Timebox.
  • Collaborate. Successful collaboration is achieved by partnering with the right stakeholders, thus improving the whole team’s performance.
  • Never compromise quality. The team agrees on the desired quality of the project products at the very beginning of the project by defining acceptance criteria.
  • Build incrementally from firm foundations. Before any significant resources are dedicated to the project delivery, the team builds a solid understanding of the project requirements and proposed solution to create a strong foundation. After each delivered increment, priorities and ongoing project viability are reassessed.
  • Develop iteratively. With results demonstrated and business feedback gathered after every iteration, teams encourage creativity, learning, and experimentation through iterative development.
  • Communicate continuously and clearly. With informal communication encouraged, daily stand-up meetings and workshops are conducted so that the communication needs of the project are fulfilled.
  • Demonstrate control. Project managers conduct planning and progress tracking, which are crucial for keeping the project under control.


Intended to be more than just a framework for creating software in increments, DSDM is a full life-cycle approach that extends beyond software development projects.

It uses the MoSCoW method of prioritization: Must have (M), Should have (S), Could have (C), and Won’t have (W), which is integral to the project’s success.

Through DSDM, teams get to focus on business needs, deliver projects on time, collaborate successfully for better performance, agree on the desired quality of the project products at the earliest part of development, build incrementally from firm foundations, develop iteratively, communicate continuously and clearly, and demonstrate control over the project.


If you enjoy reading about software development methodologies, take a look at our other blog posts.

Software Development Methodology – Extreme Programming

Software Development Methodology – Scrum

Software Development Methodology – Kanban

Software Development Methodology – Lean

Software Development Methodology – Feature Driven Development (FDD)

Software Development Methodology – Crystal

By Tuan Nguyen