Blog

  • VMware ESXi step-by-step Installation Guide with Screenshots

As part of the ongoing VMware article series, we earlier discussed VMware virtualization fundamentals and how to install VMware Server 2.

In this article, let us discuss how to install VMware ESXi.

VMware ESXi is free. However, the software comes with a 60-day evaluation mode. You should register on the VMware website to get your free license key to come out of the evaluation mode. Once ESXi is installed, you can use either the vSphere Client or the Direct Console User Interface to administer the host.

VMware ESXi is based on a bare-metal hypervisor architecture that runs directly on top of the hardware, as shown below.

    1. Download ESXi server

    Get the software from the VMware ESXi download page.

    Following are the various download options available. Select “ESXi 4.0 Update 1 Installable (CD ISO) Binary (.iso)” and burn a CD.

    • ESXi 4.0 Update 1 Installable (CD ISO)
    • Upgrade package from ESXi Server 3.5 to ESXi Server 4.0 Update 1
    • Upgrade package from ESXi Server 4.0 to ESXi Server 4.0 Update 1
    • VMware vSphere Client and Host Update Utility

    2. VMware VMvisor Boot Menu

    Once you insert the ESXi CD and reboot the server, it will display a boot menu with an option to launch “ESXi Installer” as shown below.

    3. VMware ESXi Installer Loading

    While the installer is loading all the necessary modules, it will display the server configuration information at the top as shown below. In this example, I was installing VMware ESXi 4.0 on a Dell PowerEdge 2950 server.

    4. New ESXi Install

    Since this is a new installation of ESXi, select “Install” in the following screen.

    5. Accept VMware EULA

    Read and accept the EULA by pressing F11.

    6. Select a Disk to Install VMware ESXi

The VMware ESXi 4.0.0 installer will display all available disks. Choose the disk on which you would like to install ESXi; Disk0 is the recommended choice.

    7. Confirm ESXi Installation

    Confirm that you are ready to start the install process.

    8. Installation in Progress

The installation process takes a few minutes. While ESXi is being installed, a progress bar is displayed, as shown below.

    9. ESXi Installation Complete

You will see the following installation-complete message, which will prompt you to reboot the server.

    10. ESXi Initial Screen

After ESXi is installed, you’ll see the following screen, from which you can configure the system by pressing F2.

    In the next article, let us review how to perform the initial ESXi configuration.

  • My experience assembling a reprap 3D printer

The idea of having a 3D printer had been infecting my thoughts as a new experience, so I decided to get one. While searching the internet for a model, I was shocked by the prices, which dampened my passion for having a 3D printer; but while digging deeper and deeper into the internet desert, I found an oasis of knowledge about 3D printers. This oasis is called RepRap, which is described as humanity’s first general-purpose self-replicating manufacturing machine. RepRap takes the form of a free desktop 3D printer capable of printing plastic objects. Since many parts of RepRap are made from plastic and RepRap prints those parts, RepRap self-replicates by making a kit of itself – a kit that anyone can assemble given time and materials. It also means that – if you’ve got a RepRap – you can print lots of useful stuff, and you can print another RepRap for a friend.

This raised the idea again inside my head. It took me three months to decide which RepRap I should build. Finally, I decided to get something small (a Prusa i3). I wrote a list of the required parts, collected their prices, and put everything in one list. I found that it would cost me around $600, so I started searching for vendors on eBay. I was shocked when I found a seller offering a new unassembled printer for $350, so it took me less than two seconds to think about the offer and act on it.

As I am in Qatar, I thought it would take a long time to receive a package like this, so I used one of the fastest shipping agencies (Aramex), which provides shipping addresses around the world.

  • IIS Sharepoint Performance tips

    http://www.monitis.com/blog/2011/06/13/top-8-application-based-iis-server-performance-tips/

    http://sharepointpromag.com/sharepoint-2010/top-10-sharepoint-2010-configuration-mistakes-and-how-fix-them

    https://www.leansentry.com/Guide/IIS-AspNet-Hangs

    http://forums.iis.net/t/1147420.aspx?Intermittent+Slow+Response+Times

    http://blog.fpweb.net/troubleshooting-sharepoint-sluggishness-server-side-issues/#.VhgNrROqpBc


  • Backup and Restoring MySQL database

    Back up From the Command Line (using mysqldump)

If you have shell or telnet access to your web server, you can back up your MySQL data by using the mysqldump command. This command connects to the MySQL server and creates an SQL dump file. The dump file contains the SQL statements necessary to re-create the database. Here is the proper syntax:

$ mysqldump --opt -u [uname] -p[pass] [dbname] > [backupfile.sql]
    • [uname] Your database username
    • [pass] The password for your database (note there is no space between -p and the password)
    • [dbname] The name of your database
    • [backupfile.sql] The filename for your database backup
• [--opt] The mysqldump option

For example, to back up a database named ‘Tutorials’ with the username ‘root’ and no password to a file tut_backup.sql, run this command:

    $ mysqldump -u root -p Tutorials > tut_backup.sql

This command will back up the ‘Tutorials’ database into a file called tut_backup.sql, which will contain all the SQL statements needed to re-create the database.

With the mysqldump command you can specify certain tables of your database to back up. For example, to back up only the php_tutorials and asp_tutorials tables from the ‘Tutorials’ database, run the command below. Each table name has to be separated by a space.

    $ mysqldump -u root -p Tutorials php_tutorials asp_tutorials > tut_backup.sql

Sometimes it is necessary to back up more than one database at once. In this case you can use the --databases option followed by the list of databases you would like to back up. Each database name has to be separated by a space.

$ mysqldump -u root -p --databases Tutorials Articles Comments > content_backup.sql

If you want to back up all the databases in the server at one time, you should use the --all-databases option. It tells MySQL to dump all the databases it has in storage.

$ mysqldump -u root -p --all-databases > alldb_backup.sql

The mysqldump command also has some other useful options:

--add-drop-table: Tells MySQL to add a DROP TABLE statement before each CREATE TABLE in the dump.

--no-data: Dumps only the database structure, not the contents.

--add-locks: Adds the LOCK TABLES and UNLOCK TABLES statements you can see in the dump file.

The mysqldump command has advantages and disadvantages. The advantages are that it is simple to use and it takes care of table locking issues for you. The disadvantage is that the command locks tables: if your tables are very large, mysqldump can lock out users for a long period of time.
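
If table locking is a concern and your tables use the InnoDB engine, one mitigation worth knowing (a mysqldump option not covered above) is --single-transaction, which takes a consistent snapshot without holding table locks for the duration of the dump:

$ mysqldump --single-transaction -u root -p Tutorials > tut_backup.sql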

Back up your MySQL Database with Compression

If your MySQL database is very big, you might want to compress the output of mysqldump. Just use the backup command below and pipe the output to gzip, and you will get the output as a gzip file.

    $ mysqldump -u [uname] -p[pass] [dbname] | gzip -9 > [backupfile.sql.gz]

    If you want to extract the .gz file, use the command below:

    $ gunzip [backupfile.sql.gz]
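
For scheduled backups, it also helps to date-stamp the file name. A minimal sketch, assuming a /backups directory already exists:

$ mysqldump -u root -p[pass] --all-databases | gzip > /backups/alldb_$(date +%F).sql.gz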

    Restoring your MySQL Database

Above, we backed up the Tutorials database into the tut_backup.sql file. To re-create the Tutorials database, you should follow two steps:

    • Create an appropriately named database on the target machine
    • Load the file using the mysql command:
    $ mysql -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]

Have a look at how you can restore your tut_backup.sql file to the Tutorials database.

    $ mysql -u root -p Tutorials < tut_backup.sql

    To restore compressed backup files you can do the following:

    gunzip < [backupfile.sql.gz] | mysql -u [uname] -p[pass] [dbname]

If you need to restore a database that already exists, you can use the mysqlimport command. (Note that mysqlimport loads delimited text data files, as the command-line counterpart of LOAD DATA INFILE, rather than SQL dump files; to replay an SQL dump into an existing database, use the mysql command shown above.) The syntax for mysqlimport is as follows:

    mysqlimport -u [uname] -p[pass] [dbname] [backupfile.sql]
  • Comparing Databases with mysqldbcompare

If you have two or more database servers containing the same data, how do you know if the objects are identical? Furthermore, how can you be sure the data is the same on all of the servers? What is needed is a way to determine if the databases are in sync – all objects are present, the object definitions are the same, and the tables contain the same data. Synchronizing data can become a nightmare without the proper tools to quickly identify differences among objects and data in two databases. Perhaps a worse case (and more daunting) is trying to find data that you suspect may be different but having no way of finding out.

This is where the new ‘mysqldbcompare’ utility comes in handy. The mysqldbcompare utility uses the mysqldiff functionality (mysqldiff allows you to find the differences in object definitions for two objects or a list of objects in two databases) and permits you to compare the object definitions and the data between two databases. Not only will it find the differences among database objects and their definitions, it will also find differences in the data!

The databases can reside on the same server or on different servers. The utility performs a consistency check to ensure the two databases are the same, defined as having the same list of objects, identical object definitions (including object names), and, for tables, the same row counts and the same data.

    Some scenarios where mysqldbcompare can be employed include:

    • checking master and slave for consistency
    • checking production and development databases for consistency
    • generating a difference report for expected differences among new and old data
    • comparing backups for differences

    Running the Utility

    Let us take a look at the utility in action. Below are two examples of the utility comparing what should be the same database on two servers. I am using a simple detail shop inventory database used to manage supplies. It consists of two tables (supplier, supplies) and three views (cleaning, finishing_up, and tools).
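
For reference, here is a hypothetical sketch of what such a schema could look like, inferred from the column headings in the sample output below (the real example database may differ in column types and details):

mysql -u root -p -e "
CREATE DATABASE inventory;
CREATE TABLE inventory.supplier (
    code INT PRIMARY KEY,   -- supplier id referenced by supplies.supplier
    name VARCHAR(50));
CREATE TABLE inventory.supplies (
    stock_number INT PRIMARY KEY,
    description VARCHAR(80),
    qty INT,
    cost DECIMAL(6,2),
    type VARCHAR(20),       -- e.g. cleaning, polishing, waxing, tool
    notes VARCHAR(100),
    supplier INT);"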

In the first example, we see a case where the databases are consistent. When you examine the output, you will see that each object is inspected in three passes. First, the object definitions are compared; any discrepancies would be displayed as a difference in their CREATE statements. If the object is a table, a row count test is performed, followed by a comparison of the data. Naturally, if the row count test fails, we know the data check will fail.

Note: This is the same output (and indeed the same code) that is used for mysqldiff. The mysqldbcompare utility has all of the same features with respect to the difference types presented (unified, differ, and context), which you can select with the same option, named '--difftype'.

mysqldbcompare --server1=root@localhost --server2=root@backup_host:3310 inventory:inventory
# server1 on localhost: ... connected.
# server2 on localhost: ... connected.
# Checking databases inventory on server1 and inventory on server2

                                     Defn    Row     Data
Type      Object Name                Diff    Count   Check
---------------------------------------------------------------------------
TABLE     supplier                   pass    pass    pass
TABLE     supplies                   pass    pass    pass
VIEW      cleaning                   pass    -       -
VIEW      finishing_up               pass    -       -
VIEW      tools                      pass    -       -

Databases are consistent.

# ...done

Normally, the mysqldbcompare utility will stop on the first failed test. This default means you can run the utility as a safeguard on data that you expect to be consistent. However, if you suspect or know there will be differences in the database objects or data and want to run all of the checks, you can use the '--run-all-tests' option. This option will run the tests on all objects even if some tests fail. Note that this does not include system or access errors such as a down server or incorrect login – those errors will cause the utility to fail with an appropriate error message.

    In the second example, we expect the databases to be different and we want to know which data is different. As you can see, the utility found differences in the object definitions as well as differences in the data. Both are reported.

mysqldbcompare --server1=root@localhost --server2=root@backup_host:3310 inventory:inventory --run-all-tests
# server1 on localhost: ... connected.
# server2 on localhost: ... connected.
# Checking databases inventory on server1 and inventory on server2

WARNING: Objects in server1:inventory but not in server2:inventory:
  VIEW: finishing_up
  VIEW: cleaning

                                     Defn    Row     Data
Type      Object Name                Diff    Count   Check
---------------------------------------------------------------------------
TABLE     supplier                   pass    FAIL    FAIL

Row counts are not the same among inventory.supplier and inventory.supplier.

Data differences found among rows:
--- inventory.supplier
+++ inventory.supplier
@@ -1,2 +1,2 @@
code,name
-2,Never Enough Inc.
+2,Wesayso Corporation

Rows in inventory.supplier not in inventory.supplier
code,name
3,Never Enough Inc.

TABLE     supplies                   pass    FAIL    FAIL

Row counts are not the same among inventory.supplies and inventory.supplies.

Data differences found among rows:
--- inventory.supplies
+++ inventory.supplies
@@ -1,4 +1,4 @@
stock_number,description,qty,cost,type,notes,supplier
-11040,Leather care,1,9.99,other,,1
-11186,Plastic polish,1,9.99,polishing,,1
-11146,Speed shine,1,9.99,repair,,1
+11040,Leather care,1,10.00,other,,1
+11186,Plastic polish,1,10.00,polishing,,1
+11146,Speed shine,1,10.00,repair,,1

Rows in inventory.supplies not in inventory.supplies
stock_number,description,qty,cost,type,notes,supplier
11104,Interior cleaner,1,9.99,cleaning,,1
11056,Microfiber and foam pad cleaner,1,9.99,cleaning,,1
11136,Rubber cleaner,1,9.99,cleaning,,1
11173,Vinyl and rubber dressing,1,9.99,cleaning,,1
11106,Wheel cleaner,1,9.99,cleaning,,1
11270,Carpet cleaner,1,9.99,cleaning,,1

Rows in inventory.supplies not in inventory.supplies
stock_number,description,qty,cost,type,notes,supplier
11269,Microfiber spray on car wash towel,3,16.99,cleaning,,1
11116,Microfiber wax removal towel,3,16.99,waxing,,1
10665,Glass polish pad,3,10.00,polishing,,1

VIEW      tools                      FAIL    -       -

--- inventory.tools
+++ inventory.tools
@@ -1,1 +1,1 @@
-CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `inventory`.`tools` AS select `inventory`.`supplies`.`stock_number` AS `stock_number`,`inventory`.`supplies`.`description` AS `description`,`inventory`.`supplies`.`qty` AS `qty`,`inventory`.`supplies`.`cost` AS `cost`,`inventory`.`supplies`.`type` AS `type`,`inventory`.`supplies`.`notes` AS `notes`,`inventory`.`supplies`.`supplier` AS `supplier` from `inventory`.`supplies` where (`inventory`.`supplies`.`type` = 'tool')
+CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `inventory`.`tools` AS select `inventory`.`supplies`.`stock_number` AS `stock_number`,`inventory`.`supplies`.`description` AS `description`,`inventory`.`supplies`.`qty` AS `qty`,`inventory`.`supplies`.`cost` AS `cost`,`inventory`.`supplies`.`type` AS `type`,`inventory`.`supplies`.`notes` AS `notes`,`inventory`.`supplies`.`supplier` AS `supplier` from `inventory`.`supplies` where (`inventory`.`supplies`.`type` in ('tool','other'))

Database consistency check failed.

Take a moment to read through the report above. At the top of the report (the first object tested), we see a critical error in the supplier table: there are two different names for the same supplier code. All relational database theory aside, that could spell trouble when it comes time to reorder supplies.

Notice the report for the supplies table. In this example, the utility identified three rows that differ between the two databases. It also identified rows that were missing from one table or the other. Clearly, that could help you diagnose what went wrong, and where, in your application (or your data entry).

Lastly, we see a possible issue with the tools view. Here the view definition differs slightly. Depending on the use of the view, this may be acceptable, but it is nice to know nonetheless.

    What Does This All Mean?

    If data consistency is important to you or if you need a way to quickly determine the differences among data in two databases, the mysqldbcompare utility is a great addition to your toolset. But don’t take my word for it – try it out yourself.

    Download and Try It Out!

    The MySQL Utilities project is written in Python and is a component of the MySQL Workbench tool. You can download the latest release of the MySQL Workbench here:

    http://dev.mysql.com/downloads/workbench/

There are some limitations in this first release. Currently, if the storage engines differ among the tables in the compare, the object definitions will show this difference – which is exactly what you would expect. However, the utility will stop and report the object definition test as a failure. You can still run the data consistency check by using the --force option, which instructs the mysqldbcompare utility to run all tests unless they fail from an internal exception (for example, if the table files are corrupt).

    Got Some Ideas or Want to See New Features?

One really cool feature would be if the utility generated SQL statements for synchronizing the data and objects. Would that be something you would want to see incorporated?

If you’re intrigued by this new utility and, in the course of using it, find new features or new uses that you would like to see incorporated in future revisions, please email me and let me know!

    Related Material

The latest MySQL Utilities development tree (including mysqldbcompare) can be found here:

    https://launchpad.net/mysql-utilities

  • Compare and Synchronize Databases with MySQL Utilities

The mysqldiff and mysqldbcompare utilities were designed to produce a difference report for objects and, in the case of mysqldbcompare, for the data as well. Thus, you can compare two databases and produce a report of the differences in both object definitions and data rows.

    While that may be very useful, would it not be much more useful to have the ability to produce SQL commands to transform databases? Wait no longer! The latest release of MySQL Utilities has added the ability to generate SQL transformation statements by both the mysqldiff and mysqldbcompare utilities.

To generate SQL transformations in either utility, simply use the --difftype=sql option to tell the utility to produce SQL statements.

    Object Transformations with mysqldiff

    If you would like to compare the schema of two databases (the objects and their definitions), mysqldiff can do that for you and produce a difference report in a number of formats including CSV, TAB, GRID, and Vertical (like the mysql client’s \G option).

    However, its greatest feature is the ability to generate transformation statements to alter the objects so that they conform. Best of all, mysqldiff works on all object types including the ability to recognize renames so you can get a true transformation path for all objects. For even greater flexibility, you can generate the difference in both directions. This means you can generate transformations for db1-to-db2 as well as db2-to-db1 in the same pass. Cool.

    The following shows an example of running mysqldiff on two servers where some of the objects have diverged. It also shows how you can generate the reverse transformation statements.

$ mysqldiff --server1=root@localhost --server2=root@otherhost \
    --changes-for=server1 --show-reverse util_test:util_test \
    --force --difftype=SQL
# server1 on localhost: ... connected.
# server2 on localhost: ... connected.
# WARNING: Objects in server1.util_test but not in server2.util_test:
#        EVENT: e1
# Comparing util_test to util_test                         [PASS]
# Comparing util_test.f1 to util_test.f1                   [PASS]
# Comparing util_test.p1 to util_test.p1                   [PASS]
# Comparing util_test.t1 to util_test.t1                   [PASS]
# Comparing util_test.t2 to util_test.t2                   [PASS]
# Comparing util_test.t3 to util_test.t3                   [FAIL]
# Transformation for --changes-for=server1:
#
ALTER TABLE util_test.t3
  DROP COLUMN b,
  ADD COLUMN d char(30) NULL AFTER a,
ENGINE=MyISAM;
#
# Transformation for reverse changes (--changes-for=server2):
#
# ALTER TABLE util_test.t3
#   DROP COLUMN d,
#   ADD COLUMN b char(30) NULL AFTER a,
# ENGINE=InnoDB;
#
# Comparing util_test.trg to util_test.trg                 [FAIL]
# Transformation for --changes-for=server1:
#
DROP TRIGGER IF EXISTS `util_test`.`trg`;
CREATE DEFINER=root@localhost TRIGGER util_test.trg BEFORE UPDATE ON util_test.t1
FOR EACH ROW INSERT INTO util_test.t1 VALUES('Wax on, wax off');
#
# Transformation for reverse changes (--changes-for=server2):
#
# DROP TRIGGER IF EXISTS `util_test`.`trg`;
# CREATE DEFINER=root@localhost TRIGGER util_test.trg AFTER INSERT ON util_test.t1
# FOR EACH ROW INSERT INTO util_test.t2 VALUES('Test objects count');
#
# Comparing util_test.v1 to util_test.v1                   [FAIL]
# Transformation for --changes-for=server1:
#
ALTER VIEW util_test.v1 AS
  select `util_test`.`t2`.`a` AS `a` from `util_test`.`t2`;
#
# Transformation for reverse changes (--changes-for=server2):
#
# ALTER VIEW util_test.v1 AS
#   select `util_test`.`t1`.`a` AS `a` from `util_test`.`t1`;
#
Compare failed. One or more differences found.

    Generating Data Transformation with mysqldbcompare

The mysqldbcompare utility provides all of the object difference functionality included in mysqldiff, along with the ability to generate transformation SQL statements for data. This means you can make sure your test or development databases are similar to your production databases, or perhaps even that your offline, read-only databases match your online databases. Like mysqldiff, you can also get the reverse transformations at the same time. Very cool, eh?

    The following shows an example of running mysqldbcompare to generate differences in data.

$ mysqldbcompare --server1=root@localhost --server2=root@otherhost \
    inventory:inventory -a --difftype=sql --changes-for=server1 \
    --show-reverse
# server1 on localhost: ... connected.
# server2 on localhost: ... connected.
# Checking databases inventory on server1 and inventory on server2
#
# WARNING: Objects in server1.inventory but not in server2.inventory:
#        VIEW: finishing_up
#        VIEW: cleaning
#
[...]
# TABLE     supplier                 pass    FAIL    FAIL
#
# Row counts are not the same among inventory.supplier and inventory.supplier.
#
# Transformation for --changes-for=server1:
#
# Data differences found among rows:
UPDATE inventory.supplier SET name = 'Wesayso Corporation' WHERE code = '2';
INSERT INTO inventory.supplier (code, name) VALUES('3', 'Never Enough Inc.');
#
# Transformation for reverse changes (--changes-for=server2):
#
# # Data differences found among rows:
# UPDATE inventory.supplier SET name = 'Never Enough Inc.' WHERE code = '2';
# DELETE FROM inventory.supplier WHERE code = '3';
#
# Database consistency check failed.
#
# ...done
  • Can Drupal Sites Run Effectively on a Windows Server?

    http://www.dagency.co.uk/drupal-blog/can-drupal-sites-run-effectively-on-a-windows-server

    It’s not uncommon for us to receive requests to build, maintain and/or host Drupal websites on Windows servers.

    Here are the main reasons why:

    Familiarity – The client is aware of the Windows brand and trusts it.

    Experience – The client has existing IT staff, well versed in administrating and maintaining Windows servers.

    Costs – The client wants to make use of their existing Windows environment to save costs.

    Integration – The project needs integration between a Drupal solution and Windows specific software such as a back-end office, ERP or CRM.

    Considerations

Setting up Drupal on a Windows server might seem like the logical solution, but here are a few things to bear in mind when considering deploying Drupal on a platform that is not naturally optimized for this use:

    • If set-up, deployment, development and testing tasks take longer, then development costs will be higher.
• If core updates, security patches and contributed module work take longer to implement, then ongoing maintenance costs will be higher. If these tasks are skipped because they’re difficult to implement efficiently, then the chances of a security breach are increased.
• Drupal is a big beast, and performance optimization is essential to a successful project. Slow-loading pages increase friction, create stress, cause abandonment, and may affect your visibility in Google search.

    Challenges

    1. Development Skills Crossover

The Drupal community now numbers over a million individuals (based on active drupal.org member accounts). Let’s be conservative and say that a quarter of that number are developers.

That’s a pretty healthy talent pool to dip into, but it is dwarfed by the number of skilled Windows technicians out there. The problem is that there is not as much overlap between the two as you might hope.

No developer is well versed in every available technology, and most will focus on a number of associated technologies or disciplines. A Drupal developer is therefore likely to be highly up to speed with PHP, MySQL, Apache, Linux, and quite possibly other CMSs, programming languages, and tools that go together with other parts of that knowledge base (e.g. WordPress, node.js, etc.).

    Likewise someone who has the skills to administrate a Windows server is quite likely to have experience with .NET, SQL Server, IIS, etc.

Unfortunately, the reverse is also true: an individual who has dedicated a large amount of their career to open source development (which is probably the case if they’ve ended up being a Drupal developer) is less likely to have the same skills as a Microsoft certified technician, and vice versa.

Now, I’m being careful to use words such as ‘likely’ here, since there will be many exceptions: highly talented and technologically agnostic individuals who can fit into both sets of shoes. But it’s safe to say these are very much a minority.

This means that the large talent pool we were looking at before has now shrunk significantly. There’s no reliable source of numbers here, but I’m guessing that you’d be looking at, at most, a few thousand individuals worldwide – many of whom will not be available to contract in.

    You can imagine that it will be difficult to find the right people and probably more costly when you do.

If you do already have a tame Drupal developer who is also a Windows server administrator on the side, then you’re in a fortunate position, but bear in mind that you would be back in the same position as everyone else if they became unavailable or the relationship broke down.

    2. Robustness

    Drupal can and does run on Windows… There are almost certainly examples of Drupal sites on Windows out there.

However, installations on Windows are a significant minority. Again, it’s difficult to get hard numbers on this, but I wouldn’t be surprised if less than 1% of Drupal sites run on Windows.

If the proportion is 1%, that means only 1% of site building and maintenance time is occurring on Windows-based builds, and if we assume that Windows sites get the same average traffic, then only 1% of end user interaction occurs on Windows-hosted sites.

In addition, the vast majority of development of the code within Drupal itself, and of contributed modules and themes, is likely to have been done on a Linux/Unix-based operating system running Apache.

The upshot of this is that if there are issues with Drupal (or its modules/themes) that only appear on one operating system or server stack, they are far more likely to have been found and solved already on, say, a LAMP stack than on Windows/IIS.

    3. Optimization

    During the process of putting together any software, including content systems such as Drupal, hard decisions have to be made based on performance.

An optimization that might improve performance in one set of circumstances could reduce it in another, and sometimes in these situations the developers end up jointly deciding that the improvement for the many trumps the degradation for the few. In the case of Windows / Linux / anything else, it’s pretty easy to imagine which system will end up getting the most benefit from optimization.

    The individual decisions may not make much difference on their own, but the combined effect of many such performance decreases across a system as large as Drupal will likely be significant.

Additionally, there’s a far greater chance that performance issues in Drupal on Windows have not been identified, isolated, or resolved, purely because of the much smaller amount of time that goes into testing and developing on Windows.

    4. Support

    I’m talking about support from Drupal core developers, module maintainers and the community in general here.

If you encounter an issue that occurs in Drupal on Windows and is down to some low-level difference in the way that Windows or IIS works, and it cannot be solved without Drupal core or a module being modified, then you may find it difficult to get the help you need.

This could be because the relevant people (e.g. the maintainer of the module in question) don’t:

• Have the relevant experience with the differences that are causing the issue to identify or resolve it
• Have the facilities readily available and set up to replicate it
• Regard it as a priority. This may seem churlish, but in reality, if they have an issue queue with 20 open issues which could all affect 99% of users, how much priority do you think they will give to issues that only affect 1%?

    Summary

Whilst Drupal can run on Windows, it may not be possible to run your particular project as efficiently on Windows as on a Drupal-tuned hosting stack or cloud instance such as Acquia.

If you can’t, then that loss of efficiency means more development and maintenance cost, less reliability in certain instances, and a potential increase in friction between everyone involved in making the project a success.

Just to be clear, we don’t have any kind of anti-Windows agenda here. There’s no suggestion that Windows is in any way inferior as a hosting environment; you just need a rare combination of expertise to match what’s possible with a Drupal-specific environment that’s been designed to make workflow, performance and security as good as they can be.

  • Ten Reasons to Dump Windows and Use Linux

Now is a particularly good time to ditch Windows for good, for workstations as well as servers. For instance, now that Microsoft has stopped supporting Windows Server 2003 (as of July 13), you’ll need to find something different to use for your servers. Whether it’s switching from Windows Server 2003 to 2008 or to Linux-based servers, or changing out tired and faulty Windows Vista desktops for the alien Windows 7 or something more user-friendly, Linux provides you with freedom and freedom of choice.

You might believe that dumping Windows and switching to Linux is a difficult task, but the change in thinking and the perception of the switch are the most difficult parts. If you’ve attempted an upgrade from Windows XP to Windows 7, you know what pain is.

Business owners find that Linux, once a “niche” operating system, provides the necessary components and services on which many businesses rely. Linux continues its entry into the world’s largest data centers and onto hundreds of thousands of individual desktops, and it holds near-total domination of the cloud services industry. Take the time to discover Linux and use it in your business. Here are ten reasons to give Linux at least a second look:

    1. Commercial Support

    In the past, businesses used the lack of commercial support as the main reason for staying with Windows. Red Hat, Novell and Canonical, the “big three” commercial Linux providers, have put this fear to rest. Each of these companies offers 24x7x365 support for your mission-critical applications and business services.

    2. .NET Support

    Businesses that have standardized on Microsoft technology, specifically their .NET web technology, can rely on Linux for support of those same .NET applications. Novell owns and supports the Mono project that maintains .NET compatibility. One of the Mono project’s goals is to provide businesses the ability to make a choice and to resist vendor lock-in. Additionally, the Mono project offers Visual Studio plugins so that .NET developers can easily transfer Windows-based .NET applications without changing their familiar development tools. Why would Novell and others put forth the effort to create a .NET environment for Linux? For real .NET application stability, Linux is a better choice than Windows.

    3. Unix Uptimes

    Linux stability offers business owners the peace of mind that their applications won’t suffer lengthy outages due to operating system instability. Linux enjoys the same high uptimes (often measured in years) that its Unix cousins do. This stability means that Linux can support your “99.999 percent available” service requirements. Rebooting after every patch, service pack, or driver change makes Windows an unstable and unreliable choice for those who need nonstop support for their critical applications and services.

    4. Security

    No operating system is 100 percent secure and Linux is no exception. But, Linux offers excellent security for its users. From regular kernel updates to an almost daily list of security patches, Linux code maintainers keep Linux systems very secure. Business owners who rely on commercially supported Linux systems will have access to every available security fix. With Linux, you have a worldwide community providing security fixes, not a single company with closed source code. You are completely dependent on the response of one company to provide you with timely security fixes when you use Windows.

    5. Transferable skills

    One barrier to Linux adoption was the idea that Linux isn’t enough like Unix, and therefore Unix administrators couldn’t successfully use their knowledge when making the switch to Linux. The Linux filesystem layout looks like any commercial version of Unix. Linux also uses a standard set of Unix commands. There are some Linux commands that do not transfer, but this is also true of any version of Unix.

    Windows administrators might find that using a keyboard instead of a mouse is a difficult part of the transition, but once they discover the power of the command line, they might never click again. Don’t worry, though, for you GUI-bound Windows types, Linux has several desktop managers from which to choose–not just one.

    6. Commodity hardware

    Business owners will like the fact that their “out-of-date” systems will still run Linux and run it well. Fortunately for Linux adopters, there’s no hardware upgrade madness that follows every new version of the software that’s released. Linux runs on x86 32-bit and 64-bit architectures. If your system runs Windows, it will run Linux.

    7. Linux is free

    You may have heard that Linux is free. It is. Linux is free of charge and it is free in the sense that it is also free of patents and other restrictions that make it unwieldy for creative business owners who wish to edit and enhance the source code. This ability to innovate with Linux has helped create companies like Google, who have taken that ability and converted it into big business. Linux is free, as in freedom.

    8. Worldwide community

Linux has the support of a worldwide community of developers who contribute to the source code, security fixes and system enhancements. This active community also provides businesses with free support through forums and community sites. This distributed community gives peace of mind to Linux users, because there’s no single point of failure and no single source for Linux support or development.

    9. Linux Foundation

    The Linux Foundation is a corporate collective of platinum supporters (Fujitsu, Hitachi, HP, IBM, Intel, NEC, Novell and Oracle) and members who, through donations and membership dues, sponsor Linus Torvalds and others who work on Linux full time. Their purpose is to “promote, protect and standardize Linux to fuel its growth around the world.” It is the primary source for all things Linux. The Linux Foundation is a big positive for Linux users and adopters because its existence assures continued development of Linux.

    10. Regular Updates

Are you tired of waiting for a Windows service pack every 18 months? Are you also tired of the difficulty of upgrading your Windows systems every few years because there’s no clear upgrade path? Ubuntu Linux offers new, improved versions every six months and long-term support (LTS) versions every two years. Every Linux distribution offers regular updates of its packages several times per year and security fixes as needed. You can leave your upgrade angst in your officially licensed copy of Windows, because it’s easy to upgrade and update Linux. And the best part? No reboot required.
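
For example, on Ubuntu or any other Debian-based distribution, routine updating comes down to two commands:

sudo apt-get update     # refresh the package lists
sudo apt-get upgrade    # apply available package updates and security fixes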

    If you’d like to give Linux a try, there are several distributions that are free to download and use without the need for any commercial support contract:

    CentOS – Red Hat Enterprise Linux-based free distribution

    Ubuntu – Free, enterprise Linux distribution (Commercial support available).

    Fedora – The Fedora Project is the free, community-supported version of Red Hat Linux.

    OpenSUSE – The free, community-supported version of Novell’s SUSE Linux.

Debian – The parent distribution for many Linux distributions including Ubuntu and Linux Mint.

    You can find information regarding switching from Windows to Linux through the Linux Foundation or any of its platinum members. When it comes to increasing your efficiency, saving money, and providing non-stop services to your business and its customers, how many reasons do you need?

  • HOW DO UBUNTU SERVER AND WINDOWS SERVER 2012 COMPARE?

In the home computing world, Windows is the dominant force, Mac comes in second, and Linux plays third fiddle. In the server world, however, things are a bit different: Linux still outranks Windows, though not by as wide a margin as it did a few years ago.

When it comes to Linux servers, Debian and Ubuntu are probably two of the more popular distros out there. Since Ubuntu is also the most popular consumer Linux OS, let’s compare Ubuntu to Windows Server 2012 to figure out which is right for your business.

Whether you are considering switching from Linux to Windows, or are currently using Windows Server but weighing Linux against an upgrade to Server 2012, this will help give you an idea of what both are about.

    The overview might miss a few big hitters or features, but it at least helps paint a picture.

    Let’s start with Ubuntu:

    UBUNTU FOR SERVER

Ubuntu has become a big force in the Linux world, despite the fact that many Linux purists don’t care much for the Unity UI that has brought a more mainstream look and feel to Linux.

    Some of the best features for Ubuntu on Servers include the following:

    Ubuntu Software Center

    When it comes to finding programs for managing your server, USC makes life easier. The terminal still is the preferred way for doing many things in Linux, but this certainly comes in handy as well.

    Raid Configurations

Making RAID arrays is actually pretty cheap and easy in Ubuntu, thanks to the mdadm tool. You don’t necessarily need to use the CLI, and there is even a tool that tells you if the RAID is degrading and will even help you rebuild the array.
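
As a rough sketch of what this looks like from the command line (the disk names /dev/sdb and /dev/sdc are hypothetical placeholders), a two-disk mirror can be built with mdadm:

sudo apt-get install mdadm    # the Linux software RAID management tool
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc    # create a RAID 1 mirror
cat /proc/mdstat    # check array health and sync progress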

    File Sharing & Storing

Although you might be considering Linux (or currently using it) for your server, it is still more than likely that many, if not all, of your workstations will be Windows PCs. That is why the ability to share files and storage with Windows PCs is important – luckily, Linux handles this well enough.
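
Under the hood this is usually handled by Samba. A minimal sketch, where the share name, path, and user are placeholders of my own choosing:

sudo apt-get install samba    # SMB/CIFS file sharing for Windows clients
printf '[shared]\n   path = /srv/shared\n   read only = no\n' | sudo tee -a /etc/samba/smb.conf
sudo smbpasswd -a alice       # give an existing Linux user a Samba password
sudo service smbd restart     # reload the share definitions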

    Security & Data Protection

Ubuntu has a built-in firewall turned on by default and automatic security updates, with file-encryption support. There are also advanced features like password vaults, and, due to the nature of Linux, it is relatively malware- and virus-proof (though not completely).
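
That firewall is typically managed with ufw (Uncomplicated Firewall); a quick sketch of inspecting and enabling it:

sudo ufw status verbose    # show whether the firewall is active and list its rules
sudo ufw allow 22/tcp      # keep SSH reachable before turning the firewall on
sudo ufw enable            # activate the firewall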

    Ubuntu Server Cost

If your organization doesn’t mind “being on its own” when it comes to customer support and aid, Ubuntu Server is totally free. Looking for support? Canonical offers it starting at $320 per server, per year.

    WINDOWS SERVER 2012

Windows Server 2012 hasn’t been out for very long yet, but it truly brings many great new features to the table. This time around there is the new Metro UI (as seen in Windows 8) alongside the traditional desktop. There is also a much stronger cloud focus in the latest server version, as well as many new features.

    Windows Apps

From the Windows App Store to commercial Windows apps, there is a ton of software that works on Windows. While Linux also has quite a bit of software (most of it open source), Windows Server has even more.

    Raid Configurations

If RAID configurations are important to you, you’ll be happy to know that Microsoft put a lot of focus into this with Server 2012. The latest version of Windows Server includes a brand new feature called “Storage Pools and Spaces”.

What is that exactly? It is like a RAID 0, but without needing to stripe the data across all disks. If one drive fails, you simply replace it and keep your data. Unlike RAID 5, it doesn’t give up half the space for backup drives, and it also utilizes the very efficient ReFS file system.

    File Sharing & Storing

There probably isn’t much to say here: sharing and storing is ultra-simple with Windows Server 2012. Ubuntu does a good job here, but Server 2012 does a better one.

    Security & Data Protection

Windows Server 2012 goes a long way toward making the experience more secure than past versions of Windows Server, merging its security suite into a comprehensive anti-virus/anti-malware system called Windows Defender. There is also BitLocker protection to encrypt your data.

    All in all, this is one of the most secure Windows experiences to date, but Linux is admittedly stronger in this aspect.

    Other Unique Windows Advantages

As seen in one of our other articles, some of the more unique aspects of Server 2012 include its major push towards Hyper-V, its ability to turn a GUI off and on at will, and a unique ability to ‘stream’ intensive apps to low-power Windows devices – including making it possible to run x86 desktop apps on your network even on Windows RT devices like the Surface RT.

    Price

Depending on the version your business needs, you are likely talking about close to $1,000, if not far more than that. Obviously, if you are a DIY kind of organization that doesn’t need the customer support, Ubuntu is a lot cheaper.

    While Ubuntu (and any Linux server distro for that matter) has a lot to offer, Microsoft also has many unique features that make it work great in an existing Windows environment. Ultimately it is up to you to decide which OS works better for your business, though.

  • ASP.NET Web Application Project Deployment Overview

    https://msdn.microsoft.com/library/dd394698(v=vs.100).aspx

    http://www.asp.net/mvc/overview/deployment/visual-studio-web-deployment/deploying-to-iis

  • Publishing Content to Multiple Sites, Manage from a Single Location


    August 2014: Publishing Content to Multiple Sites, Manage from a Single Location

A growing organization had requirements for ongoing communications to client teams comprised of internal employees as well as customers on the client side. Communications were coming from varied sources on a multitude of channels, without uniformity and often without adherence to corporate standards.

One of the channels the company was using to communicate with these teams was the individual, secure portals they were employing for collaboration – uploading content and sharing information.

The premise of publishing to these portals was ideal, but the effort often took days to complete, so the organization approached Abel Solutions for help minimizing the time needed to author and publish content via an individualized, secure space.

A major requirement was that a content editor would need to manage company announcement content in a single location while publishing to many consuming sites. The author should be able to set an expiration date, tag an announcement as active or archived, and limit the number of announcements displayed at a time. The company envisioned their announcements as an image with a short description; clicking on the announcement title would render the full article for reading. This required the company to restrict content authoring to a subset of employees.

    To simplify content management, there was an expectation that someone could author once and walk away—not having to reproduce the same information multiple times. SharePoint 2013 proved to be an ideal solution for their portals and for increasing productivity, eliminating administrative overhead, securing content, and publishing compliant content to all client sites with a single click of the mouse.

    Solution
    The SharePoint 2013 Product Catalog feature was implemented to achieve this goal. In this article, you will learn how to implement a product catalog solution for publishing content from a single location out to multiple site collections.

    NOTE: This solution was designed and implemented for SharePoint 2013 Enterprise [On-Premise].

    Implementation Requirements:

    1. SharePoint Server 2013 Enterprise [On-Premise]
    2. At minimum, two site collections – an authoring site and publishing site
    3. Managed Metadata Service
    4. Administrative permissions

    Authoring Site
    Let’s begin by designing the Authoring site collection that will be used to manage content for the announcements (i.e., image library and announcement lists).

    1. Create a site collection
    2. Go to Site Settings
    3. Under Site Collection Administration, select Site Collection Features
    4. Find Cross-Site Collection Publishing, click Activate

    Next, let’s create a Term Set that will be used to determine whether an announcement is Active or Archived.

    1. Access the Term Store Management Tool via Site Settings.
    2. Under Site Administration, select Term Store Management
3. Create a term set with terms defining announcements as Active or Archived.
    4. For this term set, under the Intended Use tab, select Available for Tagging.
    5. Select Save.

In the meantime, proceed to create a SharePoint list for managing announcement content. Create an announcements list, adding appropriate columns as needed. It is important to keep track of the columns and their associated Managed Properties for search capabilities: Managed Properties, not the columns themselves, are what the search engine uses to find information or values. Map a field to the term set created above.

1. Access the List Settings, then select Catalog Settings.
[screenshot: enable-library]
2. Select Enable this library as a catalog and other appropriate settings (i.e., Navigation Hierarchy) and fields as necessary.

Now, you’re ready to inform SharePoint 2013 Search that new columns are available. From Central Admin, or from List Settings > Reindex List, initiate a Full Crawl.

[screenshot: reindex-list]

Publishing Site
As for the Publishing Site, create a site collection with the preferred template of your choice. Then access Site Settings > Manage Catalog Connections.

[screenshot: connect]

Click Connect for your published catalog.

[screenshot: connect-search-web-part]

At the desired location for publishing announcements, insert a Content Search Web Part onto the page. Select Change Query to build your query for search results. In the web part panel, select the Display Template that best fits your needs. In some instances, you may have to create a custom Display Template if the out-of-the-box options are not sufficient. Select the appropriate Property Mappings for the available fields. NOTE: Please remember to restrict the search results to only the published catalog.

[screenshot: property-mapping]

Summary
If there is a need to manage content from a centralized, secure location out to other informational websites in a time-efficient way, the SharePoint 2013 Product Catalog feature is an excellent, easy-to-implement solution. This solution lessens the strain on the resources responsible for managing content across multiple websites.

    This TOTM was contributed by SharePoint Consultant, Recortis Echols.


    https://technet.microsoft.com/en-us/library/jj635883.aspx

  • GlusterFS Drupal

    Introduction

    Redundancy and high availability are necessary for a very wide variety of server activities. Having a single point of failure in terms of data storage is a very dangerous configuration for any critical data.

While many databases and other pieces of software allow you to spread data out within the context of a single application, other systems can operate at the filesystem level to ensure that data is copied to another location whenever it is written to disk. A clustered storage solution like GlusterFS provides exactly this functionality.

In this guide, we will be setting up a redundant GlusterFS cluster between two 64-bit Ubuntu 12.04 VPS instances. This will act similarly to a NAS server with mirrored RAID. We will then access the cluster from a third 64-bit Ubuntu 12.04 VPS.

    General Concepts

A clustered environment allows you to pool resources (generally either computing or storage) in order to treat various computers as a single, more powerful unit. With GlusterFS, we are able to pool the storage of various VPS instances and access them as if they were a single server.

    GlusterFS allows you to create different kinds of storage configurations, many of which are functionally similar to RAID levels. For instance, you can stripe data across different nodes in the cluster, or you can implement redundancy for better data availability.

    In this guide, we will be creating a redundant clustered storage array, also known as a distributed file system. Basically, this will allow us to have similar functionality to a mirrored RAID configuration over the network. Each independent server will contain its own copy of the data, allowing our applications to access either copy, which will help distribute our read load.

    Steps to Take on Each VPS

There are some steps that we will be taking on each VPS instance used for this guide. We will need to configure DNS resolution between the hosts and set up the software sources that we will use to install the GlusterFS packages.

    Configure DNS Resolution

    In order for our different components to be able to communicate with each other easily, it is best to set up some kind of hostname resolution between each computer.

    If you have a domain name that you would like to configure to point at each system, you can follow this guide to set up domain names with DigitalOcean.

    If you do not have a spare domain name, or if you just want to set up something quickly and easily, you can instead edit the hosts file on each computer.

    Open this file with root privileges on your first computer:

    sudo nano /etc/hosts
    

    You should see something that looks like this:

    127.0.0.1       localhost gluster2
    
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    

    Below the local host definition, you should add each VPS’s IP address followed by the long and short names you wish to use to reference it.

    It should look something like this when you are finished:

    127.0.0.1       localhost hostname
    first_ip gluster0.droplet.com gluster0
    second_ip gluster1.droplet.com gluster1
    third_ip gluster2.droplet.com gluster2
    
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    

    The gluster0.droplet.com and gluster0 portions of the lines can be changed to whatever name you would like to use to access each droplet. We will be using these settings for this guide.

    When you are finished, copy the lines you added and add them to the /etc/hosts files on your other VPS instances. Each /etc/hosts file should contain the lines that link your IPs to the names you’ve selected.

    Save and close each file when you are finished.

    Set Up Software Sources

    Although Ubuntu 12.04 contains GlusterFS packages, they are fairly out-of-date, so we will be using the latest stable version as of the time of this writing (version 3.4) from the GlusterFS project.

    We will be setting up the software sources on all of the computers that will function as nodes within our cluster, as well as on the client computer.

    We will actually be adding a PPA (personal package archive) that the project recommends for Ubuntu users. This will allow us to manage our packages with the same tools as other system software.

    First, we need to install the python-software-properties package, which will allow us to manage PPAs easily with apt:

    sudo apt-get update
    sudo apt-get install python-software-properties
    

    Once the PPA tools are installed, we can add the PPA for the GlusterFS packages by typing:

    sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
    

    With the PPA added, we need to refresh our local package database so that our system knows about the new packages available from the PPA:

    sudo apt-get update
    

    Repeat these steps on all of the VPS instances that you are using for this guide.

    Install Server Components

In this guide, we will be designating two of our machines as cluster members and the third as a client.

    We will be configuring the computers we labeled as gluster0 and gluster1 as the cluster components. We will use gluster2 as the client.

    On our cluster member machines (gluster0 and gluster1), we can install the GlusterFS server package by typing:

    sudo apt-get install glusterfs-server
    

    Once this is installed on both nodes, we can begin to set up our storage volume.

On one of the hosts, we need to peer with the second host. It doesn’t matter which server you use, but we will be performing these commands from our gluster0 server for simplicity:

    sudo gluster peer probe gluster1.droplet.com
    
    peer probe: success
    

    This means that the peering was successful. We can check that the nodes are communicating at any time by typing:

    sudo gluster peer status
    
    Number of Peers: 1
    
    Hostname: gluster1.droplet.com
    Port: 24007
    Uuid: 7bcba506-3a7a-4c5e-94fa-1aaf83f5729b
    State: Peer in Cluster (Connected)
    

    At this point, our two servers are communicating and they can set up storage volumes together.

    Create a Storage Volume

    Now that we have our pool of servers available, we can make our first volume.

    Because we are interested in redundancy, we will set up a volume that has replica functionality. This will allow us to keep multiple copies of our data, saving us from a single point-of-failure.

    Since we want one copy of data on each of our servers, we will set the replica option to “2”, which is the number of servers we have. The general syntax we will be using to create the volume is this:

    sudo gluster volume create volume_name replica num_of_servers transport tcp domain1.com:/path/to/data/directory domain2.com:/path/to/data/directory ... force
    

    The exact command we will run is this:

    sudo gluster volume create volume1 replica 2 transport tcp gluster0.droplet.com:/gluster-storage gluster1.droplet.com:/gluster-storage force
    
    volume create: volume1: success: please start the volume to access data
    

    This will create a volume called volume1. It will store the data from this volume in directories on each host at /gluster-storage. If this directory does not exist, it will be created. The force option at the end is needed here because we are placing the brick directories on the root partition; without it, recent GlusterFS versions refuse to create the volume and recommend a dedicated partition instead.

    At this point, our volume is created, but inactive. We can start the volume and make it available for use by typing:

    sudo gluster volume start volume1
    
    volume start: volume1: success
    

    Our volume should now be online.
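
    If you would like to verify this, you can query the volume by name; the output should show Status: Started:

    sudo gluster volume info volume1
    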

    Install and Configure the Client Components

    Now that we have our volume configured, it is available for use by our client machine.

    Before we begin, though, we need to install the relevant packages from the PPA we set up earlier.

    On your client machine (gluster2 in this example), type:

    sudo apt-get install glusterfs-client
    

    This will install the client application, along with the FUSE filesystem tools needed to provide filesystem functionality outside of the kernel.

    We are going to mount our remote storage volume on our client computer. In order to do that, we need to create a mount point. Traditionally, this is in the /mnt directory, but anywhere convenient can be used.

    We will create a directory at /storage-pool:

    sudo mkdir /storage-pool
    

    With that step out of the way, we can mount the remote volume. To do this, we just need to use the following syntax:

    sudo mount -t glusterfs domain1.com:volume_name path_to_mount_point
    

    Notice that we are using the volume name in the mount command. GlusterFS abstracts the actual storage directories on each host. We are not looking to mount the /gluster-storage directory, but the volume1 volume.

    Also notice that we only have to specify one member of the storage cluster.

    The actual command that we are going to run is this:

    sudo mount -t glusterfs gluster0.droplet.com:/volume1 /storage-pool
    

    This should mount our volume. If we use the df command, we will see our GlusterFS volume mounted at the correct location.
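
    As a rough illustration (the sizes shown here are made up and will depend on your droplets), the df output would include a line like this:

    df -h /storage-pool
    
    Filesystem                     Size  Used Avail Use% Mounted on
    gluster0.droplet.com:/volume1   30G  1.2G   27G   5% /storage-pool
    
    If you would like the volume to be mounted automatically at boot, one common approach is an /etc/fstab entry on the client; the _netdev option delays the mount until networking is available:

    gluster0.droplet.com:/volume1 /storage-pool glusterfs defaults,_netdev 0 0
    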

    Testing the Redundancy Features

    Now that we have set up our client to use our pool of storage, let’s test the functionality.

    On our client machine (gluster2), we can type this to add some files into our storage-pool directory:

    cd /storage-pool
    sudo touch file{1..20}
    

    This will create 20 files in our storage pool.

    If we look at our /gluster-storage directories on each storage host, we will see that all of these files are present on each system:

    # on gluster0.droplet.com and gluster1.droplet.com
    cd /gluster-storage
    ls
    
    file1  file10  file11  file12  file13  file14  file15  file16  file17  file18  file19  file2  file20  file3  file4  file5  file6  file7  file8  file9
    

    As you can see, this has written the data from our client to both of our nodes.

    If one of the nodes in your storage cluster goes down while changes are being made to the filesystem, doing a read operation on the client mount point after the node comes back online should alert it to retrieve any missing files:

    ls /storage-pool
    

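    If you would rather not wait for a read to trigger this, GlusterFS also lets you start and inspect self-healing explicitly from one of the server nodes:

    sudo gluster volume heal volume1
    
    sudo gluster volume heal volume1 info
    
    The first command kicks off a heal of any files that need it, and the second lists the entries that still require healing.
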
    Restrict Access to the Volume

    Now that we have verified that our storage pool can be mounted and replicate data to both of the machines in the cluster, we should lock down our pool.

    Currently, any computer can connect to our storage volume without any restrictions. We can change this by setting an option on our volume.

    On one of your storage nodes, type:

    sudo gluster volume set volume1 auth.allow gluster_client_IP_addr
    

    You will have to substitute the IP address of your cluster client (gluster2) in this command. Currently, at least with /etc/hosts configuration, domain name restrictions do not work correctly. If you set a restriction this way, it will block all traffic. You must use IP addresses instead.

    If you need to remove the restriction at any point, you can type:

    sudo gluster volume set volume1 auth.allow "*"
    

    This will allow connections from any machine again (the asterisk is quoted so that your shell does not expand it to filenames in the current directory). This is insecure, but may be useful for debugging issues.

    If you have multiple clients, you can specify their IP addresses at the same time, separated by commas:

    sudo gluster volume set volume1 auth.allow gluster_client1_ip,gluster_client2_ip
    

    Getting Info with GlusterFS Commands

    When you begin changing some of the settings for your GlusterFS storage, you might get confused about what options you have available, which volumes are live, and which nodes are associated with each volume.

    There are a number of different commands that are available on your nodes to retrieve this data and interact with your storage pool.

    If you want information about each of your volumes, type:

    sudo gluster volume info
    
    Volume Name: volume1
    Type: Replicate
    Volume ID: 3634df4a-90cd-4ef8-9179-3bfa43cca867
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster0.droplet.com:/gluster-storage
    Brick2: gluster1.droplet.com:/gluster-storage
    Options Reconfigured:
    auth.allow: 111.111.1.11
    

    Similarly, to get information about the peers that this node is connected to, you can type:

    sudo gluster peer status
    
    Number of Peers: 1
    
    Hostname: gluster0.droplet.com
    Port: 24007
    Uuid: 6f30f38e-b47d-4df1-b106-f33dfd18b265
    State: Peer in Cluster (Connected)
    

    If you want detailed information about how each node is performing, you can profile a volume by typing:

    sudo gluster volume profile volume_name start
    

    After profiling has been running for a while, you can view the information that has been gathered by typing:

    sudo gluster volume profile volume_name info
    
    Brick: gluster1.droplet.com:/gluster-storage
    --------------------------------------------
    Cumulative Stats:
     %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
     ---------   -----------   -----------   -----------   ------------        ----
          0.00       0.00 us       0.00 us       0.00 us             20     RELEASE
          0.00       0.00 us       0.00 us       0.00 us              6  RELEASEDIR
         10.80     113.00 us     113.00 us     113.00 us              1    GETXATTR
         28.68     150.00 us     139.00 us     161.00 us              2      STATFS
         60.52     158.25 us     117.00 us     226.00 us              4      LOOKUP
     
        Duration: 8629 seconds
       Data Read: 0 bytes
    Data Written: 0 bytes
    . . .
    

    You will receive a lot of information about each node with this command.
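
    Profiling adds some overhead, so once you have gathered what you need, you can turn it off again:

    sudo gluster volume profile volume_name stop
    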

    For a list of all of the GlusterFS associated components running on each of your nodes, you can type:

    sudo gluster volume status
    
    Status of volume: volume1
    Gluster process                                         Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick gluster0.droplet.com:/gluster-storage             49152   Y       2808
    Brick gluster1.droplet.com:/gluster-storage             49152   Y       2741
    NFS Server on localhost                                 2049    Y       3271
    Self-heal Daemon on localhost                           N/A     Y       2758
    NFS Server on gluster0.droplet.com                      2049    Y       3211
    Self-heal Daemon on gluster0.droplet.com                N/A     Y       2825
    
    There are no active volume tasks
    

    If you are going to be administering your GlusterFS storage volumes, it may be a good idea to drop into the GlusterFS console. This will allow you to interact with your GlusterFS environment without needing to type sudo gluster before everything:

    sudo gluster
    

    This will give you a prompt where you can type your commands. This is a good one to get yourself oriented:

    help
    

    When you are finished, exit like this:

    exit
    

    Conclusion

    At this point, you should have a redundant storage system that writes to two separate servers simultaneously. This can be useful for a great number of applications and ensures that your data remains available even when one server goes down.

  • MySQL cluster

    https://www.digitalocean.com/community/tutorials/how-to-set-up-mysql-master-master-replication


    Building database clusters with MySQL

  • Create Android/iOS apps with Cordova

    https://crosswalk-project.org/documentation/cordova/cordova_4.html

    https://www.scirra.com/tutorials/71/how-to-make-native-phone-apps-with-construct-2-and-phonegap

    https://www.scirra.com/tutorials/top/page-1?cat=100


    These are the basic commands needed:

    cordova create appname com.example.appname appName
    cd appname
    cordova platform add android
    cordova plugin add cordova-plugin-crosswalk-webview
    cordova plugin add com.cranberrygame.phonegap.plugin.ad.admob

    In the appname folder there will be a folder called “www”, so you just need to replace the contents of this folder with the Cordova files exported from Construct 2 (C2).

    cordova build
    cordova run

    Running the above two commands will generate the APK and run it on the attached Android phone.
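
    The APK produced this way is a debug build. Cordova can also produce a release build for publishing (you would still need to sign it with your own key before distributing it):

    cordova build android --release
    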


  • Real-time editing in Office 2016

    http://www.theverge.com/2015/5/4/8547433/microsoft-office-2016-real-time-co-authoring-features

    http://www.theverge.com/2013/11/7/5075192/office-web-apps-real-time-editing-features


    http://www.pcworld.com/article/2105861/one-year-later-microsoft-offices-collaboration-tools-are-still-a-work-in-progress.html


    http://www.pcworld.com/article/2033437/collaboration-in-microsoft-office-painful-but-not-impossible.html

    http://www.computerworld.com/article/2943496/enterprise-applications/microsofts-office-2016-preview-gets-realtime-editing-in-word-and-more.html