Friday, 18 May 2018

what is Chmod 755


If you are new to Unix permissions, I suggest reading the Unix File system Permissions post below first.

what is chmod?

As per Wiki, in Unix-like operating systems chmod is the command and system call which may change the access permissions of file system objects (files and directories).

Link to Wiki: WIKI-CHMOD

Suppose you want to write to a file but you don't have the access rights to do so. You will get a permission denied error.

So we use chmod (with the help of sudo, if needed) to change the access mode and enable the read/write/execute permissions we want.

First, list the file you want to alter:

$ ls -l findPhoneNumbers.sh
-r-xr-xr--  1 dgerman  staff  823 Dec 16 15:03 findPhoneNumbers.sh
$ stat -c %a findPhoneNumbers.sh
554

The above commands show the access mode of the file: the user and group have only read and execute access (no write), so the user cannot modify this file.

$ sudo chmod 754 findPhoneNumbers.sh

The above command grants the user write permission as well, so you can now modify the file. (Deleting a file is actually governed by write permission on the containing directory, not on the file itself.)

For a directory the bits mean something slightly different: read lets you list the directory's contents, write lets you create and delete files inside it, and execute lets you enter (cd into) it.

$ ls -ld shared_dir # show access modes before chmod
drwxr-xr-x   2 teamleader  usguys 96 Apr 8 12:53 shared_dir

The leading d represents a directory, and as you can see the mode is 755, in which only the user can write (create or delete entries in the folder).

Group and the outside world don't have permission to do so.

Mostly, we set 755 for folders and 644 for files.

Explanation :

644 means the file is readable and writable by its owner, readable by users in the file's group, and readable by everyone else.

755 is the same thing, except the execute bit is also set for everyone. The execute bit is needed to change into a directory, which is why directories are commonly set to 755.

Regular HTML files need to be viewable by the Apache user (user nobody on cPanel servers). Since this user is typically not in the file's group (and if it were, in a shared hosting environment every user would have to be in this group, which rather defeats the purpose of limiting to 640 or 750), the world section of the permissions needs to be set to readable.

Now in a suPHP environment, PHP files can just as easily be set to 600. This is because the PHP files are read by the web server as the username specified in the virtualhost section in Apache. In a non-suPHP environment though, PHP files are still read by the apache user and therefore would require a world-readable bit. Again, this would only apply to PHP parsed files, not regular .html or .htm files.

Most scripts have separate config files which include login information. And yes, for those files I would recommend that they are set to a permission setting of 600 to prevent others from reading it. Other PHP files could also be set to 600, but you're really not saving yourself anything if the PHP files have no critical information included. For example, setting the permissions to Wordpress's main index.php file to 600 kind of defeats the point because someone can just download Wordpress from Wordpress's site and read the index.php file.

suPHP and PHP as CGI really are not a standard. PHP developers cannot recommend to set the permissions on the files to 600 because if PHP is running as a DSO module on the server, then using 600 permissions will not work. This is one reason why I think suPHP and PHP as CGI should be standard on any shared hosting server, but the owner of that server or the owner of the account on that server needs to realize that it is important to set the permissions on these config files to 600 and ignore the recommendations in the software's specifications.

Source: cPanel





Unix File system Permissions

Hey Guys,

Today I will explain the Linux file permissions that I learned about.

So, in short, there are three permission classes:

1. user/owner (u)
2. group (g)
3. outside world (o)

and permissions are

1. read (r)
2. write (w)
3. execute (x)

So, when you run ls -l, it displays the list of files in a directory. Each entry begins with a mode string such as -rwxr-xr-x.

Let's expand the first character of that string:
  • -: "regular" file, created with any program which can write a file
  • b: block special file, typically disk or partition devices, can be created with mknod
  • c: character special file, can also be created with mknod (see /dev for examples)
  • d: directory, can be created with mkdir
  • l: symbolic link, can be created with ln -s
  • p: named pipe, can be created with mkfifo
  • s: socket, can be created with nc -U
  • D: door, created by some server processes on Solaris/OpenIndiana.

Here is the Linux file permission overview (image from Stack Exchange: "UNIX file permissions").

Numerical permissions

#   Permission                 rwx
7   read, write and execute    rwx
6   read and write             rw-
5   read and execute           r-x
4   read only                  r--
3   write and execute          -wx
2   write only                 -w-
1   execute only               --x
0   none                       ---
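The mapping in the table can be sketched in a few lines of JavaScript; the function names here are my own, not from any library:

```javascript
// Convert one octal digit (0-7) into its rwx triplet, as in the table above.
function digitToRwx(d) {
  return (d & 4 ? 'r' : '-') + (d & 2 ? 'w' : '-') + (d & 1 ? 'x' : '-');
}

// Convert a three-digit mode like 754 into a symbolic string.
function modeToString(mode) {
  return String(mode)
    .split('')
    .map(c => digitToRwx(Number(c)))
    .join('');
}

console.log(modeToString(755)); // "rwxr-xr-x"
console.log(modeToString(644)); // "rw-r--r--"
```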
Unix Permission Calculator:

http://permissions-calculator.org/

Tuesday, 8 May 2018

Express Js best practices

1. Use try-catch.
2. Better error handling.
3. Use JSLint to catch reference errors on undefined variables.
4. Try-catch works only for synchronous code. Because the Node platform is primarily asynchronous (particularly in a production environment), try-catch won't catch a lot of exceptions.
5. Promises will handle any exceptions (both explicit and implicit) in asynchronous code blocks that use then(). Just add .catch(next) to the end of promise chains.
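As a sketch of point 5, a tiny wrapper (the name wrap is my own, not part of Express) forwards any rejection from an async handler to next(), where Express error middleware can deal with it:

```javascript
// wrap() turns an async route handler into one whose rejections
// are passed to next(), so Express error middleware sees them.
const wrap = fn => (req, res, next) => {
  Promise.resolve(fn(req, res, next)).catch(next);
};

// Hypothetical usage in an Express app (app and db are assumed to exist):
//   app.get('/user/:id', wrap(async (req, res) => {
//     const user = await db.findUser(req.params.id); // may throw
//     res.json(user);
//   }));
```

Without the wrapper, a rejection inside the handler would go unhandled instead of reaching the error middleware.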

Production Best Practices

exports vs module.exports in node js

Hey guys,

Today I understood the concept of exports and module.exports in Node.js.

So basically, in Node.js both point to the same object reference (an empty object at first):

module.exports       exports
         \              /
          \            /
              { }

so it's like module.exports = exports;

Now exports is an object reference, and you can start adding properties to that object, like:

// calc.js

exports.add = function (a, b) {
  return a + b;
};
exports.number = 2;

When you import it in another module, Node.js returns only module.exports (to which exports also points). Once you do this:
 var calc = require('./calc');
 calc.add(1, 2);

What if you want to export a function instead of an object with properties?

If you assign exports itself to a function, you lose the shared reference, because in the end Node.js still returns module.exports.

Hence you need to assign the function to module.exports:
// calc.js
module.exports = function () {
    return 42;
};


var calc = require('./calc')();

So you can write module.exports = exports = function () { }; to make both names reference the main object again.

Thursday, 3 May 2018

Mongo DB Schema Design

Hey Guys,

I just learned how to do better schema design in MongoDB.

So basically, a blog post on the official MongoDB website suggests 3 ways:


1. One to Few
2. One to Many
3. One to Squillions

So, one can choose which is best based on your requirements, your data (queries & updates), and the read-update ratio.


1. One to Few

Go for this when you have a relation with few references. In this case you can embed the referenced elements in the main document itself.

Consider the example of a person and an address. Instead of keeping address as a separate collection, you can embed it in person. This saves read time, but the con is that you cannot access the embedded documents standalone: if you want to query addresses on their own, you cannot. So choose this when you don't need standalone access and the cardinality is small.


2. One to Many

Opt for this if you have fewer than about 1000 referenced documents. You put references to all the "many" documents in an array on the "one" side.

Consider the example of a product and its parts. You can put references to all the parts in the product document as a parts[] array. This helps retrieve the data faster. You can also store the product id on the parts side if you need more access patterns.

If you want to get the data, you do an application-level join across the collections.

> db.products.findOne()
{
    name : 'left-handed smoke shifter',
    manufacturer : 'Acme Corp',
    catalog_number: 1234,
    parts : [     // array of references to Part documents
        ObjectID('AAAA'),    // reference to the #4 grommet above
        ObjectID('F17C'),    // reference to a different Part
        ObjectID('D2AA'),
        // etc
    ]
}

Application level join

// Fetch the Product document identified by this catalog number
> product = db.products.findOne({catalog_number: 1234});
   // Fetch all the Parts that are linked to this Product
> product_parts = db.parts.find({_id: { $in : product.parts } } ).toArray() ;

You can still denormalize this by placing needed fields in the product's parts array, as below. One drawback: if the name of a part changes, you need to update it in the parts array of every product document that references that part.
> db.products.findOne()
{
    name : 'left-handed smoke shifter',
    manufacturer : 'Acme Corp',
    catalog_number: 1234,
    parts : [
        { id : ObjectID('AAAA'), name : '#4 grommet' },         // Part name is denormalized
        { id: ObjectID('F17C'), name : 'fan blade assembly' },
        { id: ObjectID('D2AA'), name : 'power switch' },
        // etc
    ]
}
Making the part names easier to get adds a bit of client-side work to the application-level join:
// Fetch the product document
> product = db.products.findOne({catalog_number: 1234});
  // Create an array of ObjectID()s containing just the part numbers
> part_ids = product.parts.map( function(doc) { return doc.id } );
  // Fetch all the Parts that are linked to this Product
> product_parts = db.parts.find({_id: { $in : part_ids } } ).toArray() ;
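The two-step application-level join can be sketched with plain in-memory arrays standing in for the two collections (the sample data mirrors the example above; string ids stand in for ObjectIDs):

```javascript
// In-memory stand-ins for db.products and db.parts.
const products = [
  { catalog_number: 1234, name: 'left-handed smoke shifter',
    parts: [{ id: 'AAAA', name: '#4 grommet' },
            { id: 'F17C', name: 'fan blade assembly' }] },
];
const parts = [
  { _id: 'AAAA', name: '#4 grommet' },
  { _id: 'F17C', name: 'fan blade assembly' },
  { _id: 'D2AA', name: 'power switch' },
];

// Step 1: fetch the product (findOne equivalent).
const product = products.find(p => p.catalog_number === 1234);

// Step 2: collect the referenced ids, then fetch the matching
// part documents ($in equivalent).
const partIds = product.parts.map(doc => doc.id);
const productParts = parts.filter(p => partIds.includes(p._id));

console.log(productParts.length); // 2
```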

3. One to Squillions

Choose this if you have more than about 1000 referenced objects, or you think the reference array would exceed the 16 MB document size limit. Place the "one" side's id in every document on the referencing side; this is an example of parent-referencing.

> db.hosts.findOne()
{
    _id : ObjectID('AAAB'),
    name : 'goofy.example.com',
    ipaddr : '127.66.66.66'
}

> db.logmsg.findOne()
{
    time : ISODate("2014-03-28T09:42:41.382Z"),
    message : 'cpu is on fire!',
    host: ObjectID('AAAB')       // Reference to the Host document
}
Application Level Query :

// find the parent ‘host’ document
> host = db.hosts.findOne({ipaddr : '127.66.66.66'});  // assumes unique index
   // find the most recent 5000 log message documents linked to that host
> last_5k_msg = db.logmsg.find({host: host._id}).sort({time : -1}).limit(5000).toArray()






Wednesday, 2 May 2018

Best Practices for my back end dev in Node js


Node.js is quite popular and easy to use. Below are a few packages and practices I would like to follow for my new project.
  • Logger : bunyan
  • Editor: visual code
  • Codes : correctly use HTTP codes
  • Headers: prefix extra headers with a user-defined name
  • API : Restify over express
  • Black box test: super-test
  • Unit test: sinon
  • Ratelimit : limit API hits
  • API Doc: Swagger
  • Example of good API documentation: Stripe API



Summary of Mongo DB Performance measures



1. Have smaller field names
2. Have replica sets and sharding
3. Each document has a max size of 16 MB; a document is retrieved in one shot.
4. Eliminate unnecessary indexes.
5. Prefer one compound index over several additional indexes.
6. Identify & remove obsolete indexes: indexes that are not used frequently or do not help in searches.
7. Find slow-running queries with explain().
8. XFS file system and RAID 10 or SATA SSD
9. Users should always create indexes to support queries, but should not maintain indexes that queries do not use
10. To understand the effectiveness of the existing indexes being used, an $indexStats aggregation stage can be used to determine how frequently each index is used.
11. The set of data and indexes that are accessed during normal operations is called the working set. It is best practice that the working set fits in RAM.
12. Popular tools such as Chef and Puppet can be used to provision MongoDB instances.
13. MongoDB provides horizontal scale-out for databases using a technique called sharding. MongoDB distributes data across multiple replica sets called shards.
14. Store counts in the user profile for faster access than computing them on the fly.

Mongo DB Index creation limitations



Please consider the below limitations before creating indexes in MongoDB.

  •  A collection cannot have more than 64 indexes.
  •  Index entries cannot exceed 1024 bytes.
  •  The name of an index must not exceed 125 characters (including its namespace).
  •  In-memory sorting of data without an index is limited to 32MB. This operation is very CPU intensive, and in-memory sorts indicate an index should be created to optimize these queries.