Archive for the ‘help’ Category

Don’t Suck at Email

Wednesday, December 9th, 2009


One of the bigger lessons I learned during my time in Boulder at TechStars was this: Don’t Suck at Email. Now that I consider myself quite good at email, it pains me to see people suck at it. And the sad truth is that most people truly suck at email. This was a topic we discussed a lot over the summer, so I hope that in sharing this information I can help a few people.

Seven Rules to Not Sucking at Email

1. Use the Subject Line – Sounds simple, but it amazes me how many people send out emails with useless subjects like “hey”, or worse – no subject at all. The subject line is not only the first glimpse a person gets of your reason for contacting them (which is extremely important if you are cold-emailing someone), but it is also a key piece of information that people might search for when trying to find your email some time down the road. Take a moment to actually think about the purpose of your email. Keep it between 2 and 7 words. Make it descriptive and succinct.

2. The “Three Sentence Rule” – This is one that can be tricky to use across all emails you send, but it is definitely worth using when you are reaching out to people who (a) you don’t know personally, (b) you have never contacted before, or (c) you know suck at replying to emails. Keep your email body down to three sentences. I know that you might feel the need to put more information into an email than three sentences, but the reality is that the people on the other end of the line are giant question-marks. You don’t know how busy they are, how much they suck at email, how interested they are in what you have to say, etc. If you go above three sentences, there is a high likelihood that they will not reply to your email. It can be challenging at first, but eventually you’ll find that you can get your point across in an extremely succinct manner. Only ask them one question, and put it in your last sentence. This leaves the question lingering in the other person’s mind, and it allows them to quickly shoot you back a response without feeling the pressure of a mass-volume, heavy-content email that will require more than 1 minute of their time. Most importantly, it gets the volley of conversation started, so your more detailed questions or information can follow on in a conversation that the other person is now invested in.

3. Spell Check – If you are not great at spelling, use the spell checker. Nothing makes you look dumber than bad spelling and bad grammar. Simple, but true.

4. Reply to Important Emails Right Away – I used to get important emails and decide that I needed to think about the response for a long time before replying. I didn’t want to send knee-jerk emails back that had incomplete information. So, I’d wait a day, maybe two days, or sometimes as long as a week. Two things happen when you do this: First, the person on the other end thinks that (a) you didn’t get the email, (b) you don’t care about the email, or (c) you are a complete idiot. Second, you could possibly forget to ever reply at all. So, when I get important emails, I reply to them right away – even if I don’t have all of the information the person needs, I’ll tell them that I don’t have it, but I’ll get it to them by X date, and then I set a reminder and make sure that I get them that information by the time I said I would.

5. Use “Unread” Status – This is a habit I’ve picked up, and I find it extremely useful. If I read an email that isn’t very important, but does require a response from me, I’ll leave it marked as “unread” until I have the time or information required to respond. Every time I open my email program, I see X unread messages, and I am reminded of the emails I need to respond to. At least once a day I know I have the time to respond to those emails (typically first thing in the morning), so I’ll go back and make sure that everyone gets the information they need.

6. Be Conscious of How Much You Suck – If you send out emails that you consider important and you don’t get a response, think about why that might be. Go back up to the points above and compare the rules to the email you sent: Did you use a descriptive subject? Was the body of your email full of too much information, or did you stick to the three sentence rule? Did you only ask one question, or did you manage to squeeze more than one question into your three sentences? Did you have spelling mistakes? Was your grammar so bad that the email didn’t even make sense? If you’ve done a good job on all of those points, then we fall into point 4: the person you are trying to contact (a) didn’t get the email, (b) doesn’t care about the email, or (c) is a complete idiot. Because so many people suck at email, I’ve often found myself falling into the (b) category. No matter which way the cookie crumbles, you need to remember the most important rule of all when sending emails…

7. Be Persistent – No matter what the reason is for someone not replying to you, persistence will get you everywhere. The best way to be persistent and not be annoying is to use rules 1, 2, and 3. Keep your emails about the business at hand, and don’t let emotion get involved – which can be difficult if you’re dealing with someone who sucks at email. The last bit of advice I can give on this point is to remember that we all live in the real world. Email is fast and easy, but the reality is that not everyone uses it, and not everyone cares about it. I know it’s scary, but if you’re dealing with someone who sucks at email, sometimes you just have to pick up the phone and call them.

How to Handle a Pull Request from GitHub

Friday, November 13th, 2009


We decided to pick up Git for the Vanilla & Garden projects after discussions we had with people from many other companies while we were in TechStars this past summer. Git is still a bit of an enigma to me. I’ve been receiving pull requests from people for a while, and I’ve failed to successfully get their changes into my code – instead opting to just manually apply their changes with my own IDE. That is, of course, a total waste of my time and contrary to the entire purpose of adopting Git. So, today I finally sat down and dug my way through to figure out how to handle a pull request.

After a few hours of frustration, it finally makes sense. Here’s the long and short of it: Define the user’s remote repo, get a local copy of their work, go into the branch you want to pull their changes into, and cherry-pick their commit into your branch.

Here are the actual commands I used to accomplish this for a number of different pull requests today:

1. Do you already have their repo set up as a remote on your dev machine? Check with:

git remote -v

If not, add their repo as a remote and fetch its latest changes with:

git remote add -f <username> git://github.com/<username>/Garden.git

Note: “Garden” is the name of our project on GitHub. Obviously, you would substitute your own project’s name.

2. Do you already have a local copy of their repo? Check with:

git branch -a

If not, create a local branch from their remote-tracking branch and check it out with:

git checkout -b <username>-master <username>/master

If you do already have a local copy of their repo, fetch the latest changes:

git fetch <username>

3. Get their changes into your personal working branch:

git checkout master
git cherry-pick <hash of user's specific changes that they requested you to pull>

That’s it. I can’t believe it took me so long to figure that out!
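To convince myself the recipe works end-to-end, here is a self-contained sketch you can paste into a terminal. It fakes the GitHub side with a throwaway local repository – every name below is made up, and a local path stands in for the git:// URL:

```shell
#!/bin/sh
set -e

# Two throwaway repos: "theirs" plays the contributor's GitHub fork.
work=$(mktemp -d)
cd "$work"

git init -q -b main theirs
cd theirs
git config user.email them@example.com
git config user.name "Them"
echo base > file.txt && git add file.txt && git commit -qm "base"
echo fix  > fix.txt  && git add fix.txt  && git commit -qm "their fix"
their_hash=$(git rev-parse HEAD)   # the hash they'd ask you to pull
cd ..

# "ours" shares the base commit but lacks their fix.
git clone -q theirs ours
cd ours
git config user.email us@example.com
git config user.name "Us"
git reset -q --hard HEAD~1         # rewind: pretend we never had their fix

# The recipe from the post: check remotes, add theirs (which fetches),
# then cherry-pick the specific commit into our branch.
git remote -v
git remote add -f contributor "$work/theirs" 2>/dev/null
git cherry-pick "$their_hash" >/dev/null
test -f fix.txt && echo "their change is now in our branch"
```

Note that cherry-picking only needs the commit's objects to be fetched; checking out a copy of their branch (step 2 above) is mostly useful for browsing their work first.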

CSS Help

Thursday, April 23rd, 2009


It’s that time, once again, when I can’t figure out how to do something with CSS and I need your help. I’ve created a full description of the problem and examples of solutions I’ve tried.

The long and short of it is that I have a control that writes messages to the screen. I am currently using an unordered list to render these messages, but I want to format them at the top & center of the screen. I want each message to be on its own line, and I want all of the messages to be encapsulated by a solid rectangular background. I want the solid rectangular background to be only as wide as it needs to be in order to encapsulate the widest message.

I’ve been able to accomplish this with a table, but that’s just semantically wrong.

Please check out the examples and let me know how you would solve this problem.
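For readers who can’t see the examples, here is a rough sketch of one common approach – not necessarily the solution used in the examples, and the class name is hypothetical – making the list shrink-to-fit with `display: table` and centering it with auto margins:

```css
/* Hypothetical markup: <ul class="messages"><li>…</li></ul> */
ul.messages {
  display: table;    /* shrink-to-fit: only as wide as the widest message */
  margin: 0 auto;    /* center the shrunken box horizontally */
  padding: 5px 10px;
  list-style: none;
  background: #333;  /* the solid rectangular background */
  color: #fff;
}
ul.messages li {
  display: block;    /* one message per line */
}
```

The catch is that `display: table` is not supported by IE7 and below, which is presumably why a truly cross-browser answer takes some finding.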

Update

Evdawg posted a solution that works cross-browser! I’ve updated the examples with his working version.

Update 2

Inky posted another kickass center-float solution that also works cross-browser. I’ve added that one to the examples as well.

Community Help: Lussumo.com Connectivity Issues [RESOLVED]

Tuesday, February 17th, 2009

Today you may have begun to notice database errors when attempting to load any of my websites. In particular, lussumo.com/community and markosullivan.ca/blog have been showing intermittent errors.

These errors have come at a particularly inopportune time (is there ever a good time?) because I am extremely busy with a new contract, development of the Garden framework, Vanilla 2, and I also manage to have a life in there somewhere (sometimes :).

When I began to notice the slow page-loading times on my server and then the errors that followed, I contacted my hosting company to find out what was going wrong. I am hosted at rackspace.com, and they are well known for their fanatical support. True to form, they got back to me quickly with a diagnosis of the problem:

Good Afternoon,

I have made some adjustments to the my.cnf configuration file in /etc

skip-bdb

query_cache_size=64M
query_cache_limit=12M

interactive_timeout=300
wait_timeout=300

tmp_table_size=128M
max_heap_table_size=128M

in order to decrease the high amount of disk I/O occurring on this server.  This should help with the query building by allocating more memory to this resource.  I have also disabled persistent MySQL connections from PHP:

mysql.allow_persistent = Off

It appears you are reaching your maximum connections limit for MySQL.  The above adjustments are conservative due to the low amount of physical memory you have on this server.

When your server runs out of physical memory, it resorts to using disk space (SWAP memory).  This swapping can and will cause your server to become unresponsive.

You may also consider increasing the amount of physical memory on this server with a RAM upgrade.  If you are interested in proceeding, I can send this ticket to a BDC who can assist you with this upgrade and update you on pricing for this component.

Besides processes in "sleep" status, indicating the use of persistent MySQL connections, it appears most of the connections are due to table locking occurring:

+-----+---------+-----------+-----------+---------+------+-------------------------------+------------------------------------------------------------------------------------------------------+
| Id  | User    | Host      | db        | Command | Time | State                         | Info                                                                                                 |
+-----+---------+-----------+-----------+---------+------+-------------------------------+------------------------------------------------------------------------------------------------------+
| 573 | xxxx | localhost | community | Query   |    9 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 574 | xxxx | localhost | community | Query   |   10 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 583 | xxxx | localhost | community | Query   |   10 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 584 | xxxx | localhost | community | Query   |    9 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 591 | xxxx | localhost | community | Query   |   10 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 593 | xxxx | localhost | community | Query   |   10 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 728 | xxxx | localhost | community | Query   |    5 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 729 | xxxx | localhost | community | Query   |    4 | Locked                        | select a.AddOnID  as AddOnID, a.AddOnTypeID  as AddOnTypeID, a.ApplicationID  as ApplicationID, a.Au | 
| 733 | xxxx | localhost | community | Query   |    3 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 734 | xxxx | localhost | community | Query   |    3 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 730 | xxxx | localhost | community | Query   |    3 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 735 | xxxx | localhost | community | Query   |    2 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 736 | xxxx | localhost | community | Query   |    2 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 737 | xxxx | localhost | community | Query   |    2 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 738 | xxxx | localhost | community | Query   |    0 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 739 | xxxx | localhost | community | Query   |    0 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs | 
| 740 | xxxx | localhost | community | Query   |    0 | Locked                        | SELECT t.DiscussionID  AS DiscussionID, t.FirstCommentID  AS FirstCommentID, t.AuthUserID  AS AuthUs |
+-----+---------+-----------+-----------+---------+------+-------------------------------+------------------------------------------------------------------------------------------------------+
As these queries are locking the table, subsequent queries have to wait and thus stack up, taking available connections.  You may find that changing this table type to InnoDB helps with this table locking issue.  You may need to discuss with your developers whether this change would have an adverse effect on your applications.

As well, I have enabled slow query logging in:

/var/lib/mysqllogs/slow-log

which will log queries taking over 5 seconds to complete.  This information will help your developers to optimize any SQL queries and/or apply indexing where appropriate.

I have also put in the option in Apache:

MaxRequestsPerChild  1000

which will help to reduce the memory footprint of this service.

While it appears that the above changes helped with the non-availability of MySQL, the server is still highly loaded.

Now, I always knew that the Vanilla 1 queries were hairy and could cause problems. I didn’t think it was going to happen any time soon, and I was hoping to get Vanilla 2 in place before this became an issue (Vanilla 2’s queries are much simpler and faster) – but it looks like that is not going to happen. Regardless, it would seem that my traffic has been slowly and steadily increasing at lussumo.com over the years. In December we peaked at 2.5 million page views for the month at lussumo.com alone, and we’ve sustained that level of traffic ever since.

Obviously I could throw more RAM at the server as the Rackspace support person suggested – this seems to be a common answer to problems of this sort (we currently only have 1GB of RAM on the server), but I don’t know if that is the answer I should be looking for – especially considering that I’m already paying a lot of money for the server.

So, I am hoping that all of those who use Vanilla can step up to the plate and offer your expertise on how to resolve this issue. I am opening the doors and accepting any and all advice, questions, ideas on how to fix the problem.

Here is what I have tried so far:

* I reviewed the slow queries that mysql logged and found that 99% of them were Vanilla’s “comments page” and “discussions page” queries. I’ve uploaded a sample of the slow query log so you can see what queries are causing problems.

* I downloaded a copy of the Lussumo Community database to my local dev machine so I could get a good look at the tables, indexes, etc.

* I found that none of the indexes that are included with the current release of Vanilla 1 were applied on the tables (other than primary keys). This is probably due to the fact that I’ve just added columns as development has continued and never had a problem before now.

* I added the indexes that are shipped with the current release of Vanilla 1 to the community database. I found that this had little-to-no effect on the speed of the page-load (it might have even made the queries slower).

* I’ve created a script that converts all of the tables in the community db to InnoDB tables (as suggested by the Rackspace tech). Some googling turned up both good and bad results from this type of change. It could start to throw fatal errors when data is being inserted (rather than while it’s being selected, as it is now). I have not yet run this script as I want to hear back from the community first.

* I’ve taken the community forums offline and enabled wp-cache on this blog so that everyone can have access to this blog post and be fully aware of the issue.

Help!

So, I am reaching out to you for help. No question is a dumb one. Any idea is welcome. Please share your expertise and help us to get this convoy back on the road…

Update

It turns out that I had forgotten to apply all of the indexes & optimizations to this database through the years that we’ve been online. The growth of our community, combined with poor indexing, caused a couple of the tables to begin to lock. The LUM_User and LUM_UserDiscussionWatch tables in particular were locking. These tables are updated frequently with login information and discussion tracking information respectively. Because the tables were MyISAM type, the entire table would be locked whenever an update was applied to just a single row – this meant that all 9,000+ user records would get locked whenever anyone’s “DateLastActive” field was updated, and all 90,000+ records in the LUM_UserDiscussionWatch table would get locked whenever anyone even looked at a single discussion (and the record of their view of that discussion was recorded).

To fix both of these issues, I changed their table types to InnoDB so that only the affected row is locked when updates are applied.

I also analyzed the Discussions & Comments queries, which are (obviously) the most actively run queries in the application. The comments query was extremely slow. After running EXPLAIN on the query, I found that it was indexed incorrectly. For some reason the LUM_Comment table was using both the CommentID and the DiscussionID columns as its primary key. I removed the DiscussionID from the primary key and added it as a simple index. This saves the query from scanning the entire LUM_Comment table when performing the join to LUM_Discussion. I also found that the LUM_UserBlock table had no indexes at all, so I added those and was able to further reduce the query time. Here is a list of the changes that I made to the database for anyone who might be interested:

ALTER TABLE `community`.`LUM_Comment` DROP PRIMARY KEY,
 ADD PRIMARY KEY  USING BTREE(`CommentID`),
 ADD INDEX `comment_discussion`(`DiscussionID`);

ALTER TABLE LUM_UserBlock ADD INDEX (BlockingUserID);
ALTER TABLE LUM_UserBlock ADD INDEX (BlockedUserID);

ALTER TABLE LUM_User ENGINE=InnoDB;
ALTER TABLE LUM_UserDiscussionWatch ENGINE=InnoDB;

Thanks to Damien (Dinoboff) and Dave (Wallphone) for jumping in and offering some assistance.

CSS 101

Wednesday, February 11th, 2009

A bit of help, if you please…

Check out this page, which contains a 200px wide div floated left, and a fieldset. Note how the fieldset is completely contained to the right of the div.

Now check out this page, which also contains a 200px wide div floated left, but instead of a fieldset, next it has a div. Note how the div goes behind the floated element all the way to the left side of the screen.

I was under the impression that (at least in Firefox) all elements are created equal, and that the fieldset in the first example must just have some styles applied (by the browser) to make it stay completely to the right of the floated div. I was hoping that I could somehow force the div in my second example to behave like the fieldset in the first example, but I can’t seem to figure out how to make that happen without forcing specific margins on the div.

Any ideas?
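A note for anyone landing here later – one plausible explanation: most browsers render a fieldset as a new block formatting context, so it sits beside a float instead of flowing behind it. If that’s right, a plain div can be given the same behavior (class names here are hypothetical):

```css
.sidebar {
  float: left;
  width: 200px;
}
.content {
  overflow: hidden;  /* establishes a new block formatting context, so this
                        div sits beside the float the way the fieldset does */
}
```

The trade-off is that anything deliberately overflowing the div gets clipped by `overflow: hidden`.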