Monday, November 17, 2008

Windows 7 [Vienna]


Windows 7 (formerly known as Blackcomb and Vienna) is the working name for the next major version of Microsoft Windows as the successor of Windows Vista.[1] Microsoft has announced that it is "scoping Windows 7 development to a three-year timeframe", and that "the specific release date will ultimately be determined by meeting the quality bar."[2] Windows 7 is expected to be released sometime in 2010.[3] The client versions of Windows 7 will ship in both 32-bit and 64-bit versions.[2] A server variant, codenamed Windows Server 7, is also under development.

Friday, November 07, 2008

WHY EMPLOYEES LEAVE ORGANISATIONS?

Every company faces the problem of people leaving the company for better pay or profile.

Early this year, Mark, a senior software designer, got an offer from a prestigious international firm to work in its India operations developing specialized software. He was thrilled by the offer.

He had heard a lot about the CEO. The salary was great. The company had all the right systems in place: employee-friendly human resources (HR) policies, a spanking new office, and the very best technology, even a canteen that served superb food.

Twice Mark was sent abroad for training. "My learning curve is the sharpest it's ever been," he said soon after he joined.

Last week, less than eight months after he joined, Mark walked out of the job.

Why did this talented employee leave?

Mark quit for the same reason that drives many good people away.

The answer lies in one of the largest studies undertaken by the Gallup Organization. The study surveyed over a million employees and 80,000 managers and was published in a book called "First, Break All the Rules". It came up with this surprising finding:

If you're losing good people, look to their manager. The manager is the reason people stay and thrive in an organization, and he's the reason why people leave. When people leave, they take knowledge, experience and contacts with them, straight to the competition.

"People leave managers not companies," write the authors Marcus Buckingham and Curt Coffman.

How does a manager drive people away?

HR experts say that of all the abuses, employees find humiliation the most intolerable. The first time, an employee may not leave, but a thought has been planted. The second time, that thought gets strengthened. The third time, he looks for another job.

When people cannot retort openly in anger, they do so by passive aggression. By digging their heels in and slowing down. By doing only what they are told to do and no more. By omitting to give the boss crucial information. Dev says: "If you work for a jerk, you basically want to get him into trouble. You don't have your heart and soul in the job."

Different managers can stress out employees in different ways - by being too controlling, too suspicious, too pushy, too critical. But they forget that workers are not fixed assets; they are free agents. When this goes on too long, an employee will quit - often over a trivial issue.

Talented men leave. Dead wood doesn't.

What worries Me....

What worries me most about the credit crunch is that if one of my cheques is returned stamped "insufficient funds", I won't know whether that refers to mine or the bank's!

Wednesday, September 24, 2008

Changing Good Programmers into Great

Not everybody is cut out to be a programmer. But for those who are, there is no reason you, as a manager or executive, can't help them move from just good to great, amazing and even awesome. Jeff Cogswell shows you how.

During the past two decades, I've worked with some really great programmers and software developers. And, unfortunately, I've worked with more than a few who probably should have chosen a different field. But the vast majority of the programmers fell somewhere in the middle. They were good. Not amazing, but definitely not bad either.

For managers and executives who have programmers and software developers reporting to them, the variation in skill can present quite a problem when you're trying to build a great product. How can you transform the good programmers into fantastic, amazing, awesome programmers?

Believe it or not, you can. Let's see how to do it.

First, you need to make sure your programmers have the essential skills, the fundamentals. Some do; some don't. (Just because they survived an undergrad program in computer science doesn't mean they do.)

Now this is going to sound obvious, but at the very least, every software developer must be a master of writing good lines of code. You've seen those who aren't, the programmers who sit there for hours, staring at 10 lines of code, trying to figure out what's wrong and can't. This kind of thing can happen to all of us programmers occasionally. But the problem is the programmer who does that on a regular basis.

I've worked with these programmers and you've probably had some working for you. They would come to me all the time, interrupting my work, and drag me to their cube to debug their code.

And this is going to sound rough, but the reality is some people just aren't cut out for programming. I'm talking about a very small percentage of people here, fortunately. But they're out there. If you have such a programmer on your staff, it might be time for a meeting with HR and a talk about other opportunities, perhaps in sales, customer support, testing (QA) or some other area of the company. He or she may excel in these areas. But you probably don't want him or her dragging the whole team down.

Fortunately, that's just a small percentage. Let's talk about the huge population that are in the middle, those who are good but not amazing. These are the ones you can help.
In fact, many of them are future experts but are, right now, just younger and less experienced. Such people don't always know about all the issues that can arise in software development. This isn't a problem with their ability; it's really just a problem of inexperience, something they'll overcome with time.

Probably the single biggest issue that younger programmers overlook is the hidden complexity in today's software systems. This is true especially for today's Web-based systems that can serve multiple Web users simultaneously.

In the old days, we would run what was called "stress testing" on our desktop applications. This involved running a program that would put our computer into a low-memory, low-disk-space state, allowing us to see whether our software could function. But with today's multiuser Web sites, the biggest problems aren't so much stress on memory and disk space, since typically the software will be running on large servers with a team of IT people making sure there's plenty of both. Instead, today the problems come more from multiple users trying to do the same thing simultaneously. And that's where the less experienced programmers might fall short in their coding.

Here's an example: Suppose your team is developing an ASP.NET application that will be storing data in an XML file. Ask your team what it takes to write data to the file. If they’re inexperienced, they might express the answer very simply, as in:
· You open the file.
· You write to it.
· You close the file.
Or, you ask them how to read a file:
· You open the file.
· You read the data you need.
· You close the file.

Seems simple and straightforward enough. But it's not. There are actually far more complex issues that can come up, issues that experienced programmers are well aware of but less experienced programmers might overlook, causing major problems when the software is running in a production environment. For example, what if two people are visiting the site simultaneously? Both are entering data into a Web form that needs to be saved. Your server is handling both people at the same time. Remember, the servers can run multiple "threads" at once (that is, the program is running the same parts of the code simultaneously). A separate thread is used to handle each user.

And that's where things get messy. The programmers might have written the code to open the file, read the whole thing into memory and close the file. Then the program would add on the user's new data to the data in memory, and write the whole thing back to the file, effectively replacing the entire file. This is common practice and it works well.
The problem is that if there are two users accessing the system, both threads might open the file, read the data in and close it at roughly the same time. Then simultaneously each thread might modify its own private version of the data. The first thread will write the data to the file and close it. Then the second thread will do the same, perhaps a tiny moment later, overwriting the first thread's version, losing the first user's data.
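The lost-update race described above is easy to reproduce. Here is a minimal sketch of the naive read-modify-write cycle, written in Python purely for illustration (the article's actual context is ASP.NET; the file name here is hypothetical):

```python
# Naive, UNSAFE read-modify-write cycle: exactly the pattern described
# above. Works fine single-threaded, loses data under concurrency.
DATA_FILE = "records.txt"  # hypothetical data file

def append_record_unsafe(record):
    # 1. Open the file and read the whole thing into memory.
    try:
        with open(DATA_FILE) as f:
            lines = f.read().splitlines()
    except FileNotFoundError:
        lines = []
    # 2. Add the new data to the in-memory copy.
    lines.append(record)
    # 3. Write the whole thing back, replacing the entire file.
    with open(DATA_FILE, "w") as f:
        f.write("\n".join(lines))
```

If two threads call `append_record_unsafe` at nearly the same moment, both can read the old contents before either writes, and whichever thread writes last silently discards the other's record.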

Or, one thread might open the file for writing, and then the second thread might try to do the same but not be able to (because the operating system locked the file when the first thread opened it), and this thread might not handle the situation appropriately and could crash the whole site, causing error messages to show up in the browsers of all the people visiting the site.

I've seen this kind of thing happen many times. And that's when we programmers get a phone call at 3 in the morning because the operations team couldn't get the software up and running again. And then we have to either connect remotely or drag our butts into the office in the middle of the night, load up on caffeine and track down the problem.

And then we find exactly what the problem is and how to fix it. In our example in particular, it turns out the programmer would have been better off using a set of classes built into the .NET framework that allow for read and write locks on files. These classes are easy to use and take only a couple lines of code. Had the programmer used these, the problem wouldn't have occurred.
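The same idea the author describes (the .NET read/write-lock classes) can be sketched in Python, again for illustration only, with a single lock held across the whole read-modify-write cycle so that concurrent updates serialize instead of clobbering each other; the file name is hypothetical:

```python
import threading

DATA_FILE = "records.txt"      # hypothetical data file
_file_lock = threading.Lock()  # guards the entire read-modify-write cycle

def append_record_safe(record):
    # Holding the lock across read, modify, AND write makes the update
    # atomic with respect to other threads in this process: no thread can
    # read stale contents while another is mid-update, so no data is lost.
    with _file_lock:
        try:
            with open(DATA_FILE) as f:
                lines = f.read().splitlines()
        except FileNotFoundError:
            lines = []
        lines.append(record)
        with open(DATA_FILE, "w") as f:
            f.write("\n".join(lines))
```

A real ASP.NET application would use the framework's reader/writer locks, which also allow multiple concurrent readers, but the principle is the same: the lock must cover the entire read-modify-write cycle, not just the individual file operations.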

As a programmer, I remember seeing such mess-ups in code and complaining to others in the company about it. One tech writer friend of mine laughed and said, "Oh, you guys each have your own way of doing things, and neither is better than the other."

Oh, really? Well there's a good litmus test for determining if the code is right: Does it crash?
Good software doesn't crash. Good software doesn't cause phone calls in the middle of the night where panicked people have to try and figure out why the software crashed.

I've expressed this litmus test before to others, but was met with severe resistance from other programmers. People don't like criticism. But the fact is, perfect software doesn't crash. The reality is that with today's massive systems it's nearly impossible to get every single bug out. But it's certainly within reason to get as many bugs as possible out, minimizing crashes as much as possible and not using the excuse that "Bugs are inevitable and we should live with them."

And writing code for a Web server that crashes when two users connect to it simultaneously is unacceptable.

By handling things correctly, a manager can teach his or her team not to allow such bugs in the first place, and can oversee the process to prevent such bugs. How can this be done?
First, the team (and the QA folks) must do their job in testing. It's easy to run through a test and see that the program works fine when only one user is accessing the software; it's also easy for you, as the manager, to see that it's working wonderfully and to feel good about it. But it's not so easy to run a real stress test where hundreds or even thousands of threads are running simultaneously, all trying to access and manipulate the data. That's when you'll discover the real problems, the kind that can bring a system to its knees.

To run these kinds of tests requires that you have a QA team of testers who know their tools and know how to simulate such conditions. And further, it's important that the coders are aware of the issues so that by the time their code gets to the QA team, it's already set up to handle high-load situations.
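A stress test of this kind doesn't need heavyweight tooling to get started. The sketch below (Python, for illustration only) releases many threads at a target function simultaneously and counts how many fail, which is the bare minimum for surfacing concurrency bugs that single-user testing never hits:

```python
import threading

def run_stress_test(target, n_threads=100):
    """Minimal stress-test harness: fire n_threads at `target` at once
    and report how many of them raised an exception."""
    errors = []
    # A barrier releases all threads together, maximizing their overlap
    # and therefore the chance of exposing a race condition.
    barrier = threading.Barrier(n_threads)

    def worker():
        barrier.wait()
        try:
            target()
        except Exception as exc:
            errors.append(exc)  # list.append is thread-safe in CPython

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(errors)
```

Dedicated load-testing tools drive realistic HTTP traffic at a real server; this barrier trick merely maximizes thread overlap inside one process, but even that is often enough to make an unsynchronized read-modify-write cycle fail.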

That brings me to the second point: The developers must be trained in how to write code that handles such situations correctly so the system doesn't crash. I said that some bugs will creep in, and as much as I don't want to live with that situation, I suppose I accept it as fact. (And your programmers, by the way, should have a similar attitude, rather than just shrugging and saying bugs are normal. Bugs are unacceptable, and we must stop as many as possible, but occasionally we have to accept that a couple might slip through.)

Thus, at a minimum the programmers must be aware of what can go wrong, and must know how to write code that handles those situations correctly. And that means writing code that is "thread-safe" and is scalable (meaning it can run not only on a single-user basis, but easily and efficiently when hundreds or thousands of people are using it simultaneously, and even when divided up onto multiple servers).

So how do you help the good programmers grow into superior programmers who can write such code?

Early on I somehow stumbled upon something that saved my career many times. I realized that I couldn't possibly know everything. Instead, I realized that a good programmer knows where to quickly find the answers.

Often programmers would come to me for help. And more times than not, I'd say, "Give me 10 minutes and I'll have the answer." Then I'd go back to my cube, quickly look up the answer, and then return. What was I doing? I was going through the same references (Web sites, books, online help) that I'd been through so many times before and finding the answer quickly. So rather than just give up and call someone else for help, I would find the answer myself. Of course, each time I learned the answer, I'd try to remember it, at least in general, so that if it came up again I would either know it or find the answer even more quickly.

Consider the earlier threading example. I mentioned it's on an ASP.NET platform. Off the top of my head, from experience, I know there's a class that allows file locking for read and writes. I can't remember the exact name of the class, but I know that it involves locking and reads and writes. And I know where the standard docs are: the MSDN online documentation or, better yet, the local copy that ships with Visual Studio, the Combined Help Collection. Or, better still, if I remember when I wrote the code before, I could just look at how I did it before. And that means I can immediately locate the name of the class when I need it.

Of course, some really confident programmers want to "roll their own" and build their own locking mechanism, for example, and skip the built-in classes. This could happen for a couple of different reasons. First, the programmers might not even know that there's an alternative to rolling their own. How could they know that there's a handy class built right into the .NET framework that handles the read and write locks? The key is using what I learned so long ago, and knowing the resources and taking a few moments to look through them before rolling your own solution. And that's where you, the manager, can help: You can require that your programmers go through the online docs and find whether the solution already exists.

But the other reason a programmer might want to roll his or her own is because he or she might think the pre-built one isn't good enough. Now remember, I'm not talking about entire systems here that are already built. I'm talking about small, individual functions and classes, the nuts and bolts of your system, such as the file locking mechanism. Remember, programmers like to build things. It's their nature. And they feel especially good if they can build something that was better than the previous one.

But also remember: The class in this case is already built, and takes just a couple of lines of code to use. And it's already been through testing at Microsoft and has been used by thousands of other programmers successfully. You know it works.

Also, programmers have a tendency (myself included) to want to add all sorts of extra features to really make something cool. For example, a file locking mechanism would be even more useful if it included built-in file caching and a queue to manage the locks, and went far beyond the little one in the library.

But that's overkill. And the last thing you want is for your programmers to spend two weeks, a week or two days writing code when all they need to do is write the one or two lines to make use of the class Microsoft gave us (or whoever built the library you're using for your particular platform). Besides, remember that even though the programmer might be able to roll out his or her own version in a day, your testers will have to now test that code in addition, and what was a day of work could turn into a week or two weeks. Compare that with using one or two lines of code that call a pre-existing, tested class. Which, then, I ask is better? Which is the right way to do it?

Of course, there may be times when the built-in class doesn't do everything you need. In that case, you need to carefully weigh your options and tradeoffs. Is there a way to make use of the class, just without all the extra features you were hoping for? Or is there a way to build a new class that expands on the existing class? (That's usually your best option.) Only if neither works should you consider having your team write its own class. But you'll want to make sure you've exhausted your options before going that route. The last thing you want is to find out six months down the road that the thousand lines of code somebody wrote are barely functioning right, and it turns out there was a pre-existing class that did exactly what you needed and would have required three lines of code on the programmer's part.

Conclusion
The moral here, then, is to make sure your programmers are familiar with the information resources, especially the online documents, as well as any existing libraries and frameworks they might have access to that have been tested many times over. Then you need to make sure that they're not rolling their own classes and components when one already exists that does the job. Finally, they need to be aware of the real issues that come up in a multiuser, high-performance system such as a Web server handling thousands or even millions of sessions a day.

19th Sep, 2008

A bad day... in my life.
There is a bad story associated with it and I will write about it some other time... I just wanted to make a note of it so that I don't forget it...

Sunday, September 21, 2008

Building Effective Corporate Cultures One Decency at a Time

Building Effective Corporate Cultures One Decency at a Time
By making decency a habit, leaders can surreptitiously and effectively protect a corporate culture—not just the experience of work, but also the company's moral underpinning.

By Steve Harrison

June 11, 2007 — CIO
The most basic decencies are those that demonstrate respect and consideration. A simple "hello" at the start of the day and "goodbye" at the end of the day are obvious but sometimes overlooked forms of consideration. Remembering the names of the people you work with regularly is just as important as saying hello. Beyond these basics, here are some other ways to demonstrate respect and consideration.

Protect the Dignity of Others
We choose whether we are going to build people up or diminish them. This choice is especially poignant during a downsizing. It's up to those of us at the top to protect the dignity of each and every person who has to be separated. Sometimes, the choices are much less public, but no less telling. Think about how much information you have about people in your organization. Resist the temptation to gossip or break confidences.

Don't Keep People Waiting
Early in my career, I thought that letting the salespeople calling on me "cool their heels" was acceptable. I was the customer, after all. A thoughtful supervisor set me straight. Since that correction, I have never consciously kept a visitor, including a salesperson, waiting. Receiving people promptly is a decency that counts because it is courteous and respectful.

Make Meetings Decent
For meetings you call, be the first to arrive and the last to leave. Leave the Blackberry behind. Rearrange seating to assure that everyone is included and groups are not set in opposition. Take time for introductions. Make space for quiet colleagues to offer their opinions. Finish on time or, for greatest effect, finish early.

Recognition Decencies
The Golden Rule, "Do unto others as you would have others do unto you," is a valuable guideline in life, but when it comes to recognizing employees, I suggest applying the Platinum Rule: Do unto others as they would have you do unto them. Outside of formal recognition and reward programs, here are some well-received ways to recognize people day after day.
· Say "thank you." Hardly anyone will dispute the value of saying thank you, but in many work places, the rush of deadlines crowds out appreciation. It's best to offer thanks personally and in front of peers. "Thank you" means even more when the thought is delivered in writing. While it's tempting to send off an e-mail instead of taking the time to find a note card and address an envelope, it will mean a lot more on paper.
· Little things mean a lot. Bring in coffee, donuts and snacks to share on an unpredictable basis. Or order a pizza or a huge submarine sandwich for a communal lunch. Don't make a big deal of it, but just say it's a token of how much you appreciate how hard everyone is working.
· Appoint a proxy. Invite a subordinate to represent you at conferences or meetings. If you select carefully, the associate will get a psychic kick out of representing you. He or she will feel your trust. Later, the employee can share insights gained with team members, giving a second boost of recognition.
Listening Decencies
Next to physical survival, the greatest need of a human being is to be acknowledged. "Attention must be paid!" says Willy Loman in Death of a Salesman. Everyone yearns to be understood, to be affirmed, to be validated and to be appreciated. Being listened to is the prerequisite for all of these. Most of us pay little attention to the quality of our listening. Especially in business situations, we are too busy thinking about what we call "the big picture" to notice that big pictures are the sum of personal moments of truth. Here are some ways you can practice listening decencies.
· Talk less. It's really that easy . . . and that hard. Listening starts when we stop talking. Some tricks to change the balance are:
· Stop talking after 60 seconds and give the other person a chance to chime in;
· Resist the temptation to interrupt—even if it's to agree with the person talking. When you do, you are inadvertently making the conversation about you;
· Value silence as a chance for the other person to gather their thoughts.
· Voice questions, not opinions or decisions. As a leader, stating your opinion can immediately shut down the conversation. To get the most of your diverse team, ask open-ended questions, or say, "I wonder what would happen if . . ." Then be quiet, and listen. Hold back from judgment, from expressing objections and from giving advice.
· Don't multitask. We all need to be efficient. But you can't truly listen to someone and do anything else at the same time. Try focusing on listening for just 10 minutes. You'll learn more and make the other person feel more valued.
Executive Humility Decencies
I first heard the term executive pomposity decades ago, and I have come to believe that a sense of entitlement bred from authority is by far the most corrosive agent in organizations. All this attitude does is distance executives from their colleagues and customers, and, ultimately, from their business. As much as it might inflate executive egos, pomposity deflates others around them. You'll do both yourself and your organization a favor by avoiding anything that smacks of RHIP, or "rank has its privileges," like exclusive dining rooms or parking places. Some other decencies you can practice are:
· Share the credit, hoard the blame. When things go well, share the credit. When things go badly, be known as someone who is accountable. There will be time to sort out the problem and learn from it later. Be known as someone whose first instinct is to fix the problem rather than affix the blame.
· If you make a mistake, apologize. Far from diminishing your importance, an apology demonstrates humility, respect for others and a desire to learn, all of which are traits of strong leaders. Refusing to apologize after having made a mistake demonstrates pomposity of the worst kind. Saying "I'm sorry" effectively is one of the most powerful small decencies available to any leader. Good apologies deliver the "4 Rs:"
· Recognition of the mistake
· Responsibility for the error
· Remorse expressed
· Restitution offered
· Make yourself accessible. In his book The Transparent Leader, former Dial Corporation CEO Herb Baum says, "The road to transparency is itself an open one. . . I stress actual physical accessibility as a tool to develop our culture." One way he became accessible was through a program called "Hotdogs with Herb." He describes this as "a fun, casual lunch where I get to spend quality time with a small group of employees . . . It allows them to get to know me, and gives me a chance to get to know them and listen to their concerns or feedback . . . We always have hotdogs—my favorite dish." Former New York City mayor Ed Koch was known for asking city employees, "How am I doing?" and really being open to the answer. In the end, accessibility is not just about being available, it's about being open to input as well.
Ripples in a Pond
People will perk up when you offer a decency. Employees like to work in a place where consideration and respect are palpable and leaders listen with humility. And I think they like to use a leader's example as permission to make the extra effort to act with decency themselves. That's how one leader's commitment to decency emanates throughout a company like ripples in a pond. It's quiet. It may take a little while. But it will bring about a change that is deeply rooted within individual behavior, and that's the best foundation of all.
Steve Harrison is chairman of Lee Hecht Harrison, a global leader in career management solutions based in Woodcliff Lake, NJ. This article is drawn from his new book, The Manager's Book of Decencies: How Small Gestures Build Great Companies (McGraw-Hill, 2007). Harrison welcomes examples of decencies and gives a free book each month to the person submitting the most powerful example of a business decency. For more on the book or to submit your decencies, visit http://www.bookofdecencies.com.

© 2008 CXO Media Inc.

Read it... only if you're a nice person.

From: www.cio.com
The Danger of Being Too Nice at Work – Meridith Levinson, CIO
September 18, 2008

If you're a nice person, you probably think that being nice works to your advantage in the office. After all, how could it be any other way? Genuinely nice people are well liked. They're generally easy to work with. They care about others and tend to have good values. In a fair and just world, that sort of behavior should be rewarded. Right?
Not necessarily. Too often, nice, competent people get passed up for promotions. Instead, the plum job goes to the prima donna or the person who plays politics. The bonus is bestowed upon the squeaky wheel or the obnoxious go-getter. In this environment, the nice guy really does finish last. It's frustrating because it goes against everything we were taught as children about the Golden Rule.
What nice people may not realize is that they're too nice, and that being too nice can seriously stymie their career growth and success, says Russ Edelman, a SharePoint consultant and co-author of the book Nice Guys Can Get the Corner Office: Eight Strategies for Winning in Business Without Being a Jerk (Portfolio, 2008). "The people in business who suffer from nice guy syndrome are not achieving their true potential," he says.
The problem with being too nice, according to Edelman—who comes off as a very nice guy—is that you're a doormat and people take advantage of you. Nice people are so concerned about pleasing others and not making waves that they don't stand up for themselves.
Edelman cites a nice man he interviewed for his book, who was vying for an executive position. The nice man was well-respected and well-liked in his company, and had a very good shot at the job. Of course, someone else was competing for the position. When the nice man was asked in an interview about his competitor, according to Edelman the nice guy said he thought his competitor would do a fantastic job. The nice contender wound up writing a letter of recommendation for his competitor because he didn't want to cause a stir by vying for the executive-level job, says Edelman. End result: The competitor got the job, and the nice guy remained in his spot on the corporate ladder.
"The nice guy is forever putting the oxygen mask on someone else before putting it on himself," says Edelman.
The Cost of Nice in Business
Being too nice is not just a problem for individuals. It's a problem for businesses, too. Employees who are too nice cost businesses time and money.
In a survey of 50 CEOs, Edelman asked about the impact of "being too nice" on their businesses. The CEOs responded by saying that being too nice cost them eight percent of their gross revenues. In other words, if the CEOs' companies had been more aggressive, they believed they could have earned more money.
Edelman notes that managers who are too nice are reluctant to make decisions on their own. They fear hurting the feelings of anyone whom they don't ask for feedback, so they include everyone in their decision-making. That wastes time and can lead to missed opportunities.
"The overly nice guy usually defers to others. They're reluctant to create losers," says Edelman. The irony is that in the process of trying to make everyone a winner, the nice guy ends up the loser.
Managers who are too nice also avoid confrontation, says Edelman. They'd rather ignore problems than address them head on. Of course, ignoring problems only makes them worse, and burying one's head in the sand does not inspire the confidence of the manager's team or of his superiors, adds Edelman. It only inspires their ire.
"If you appease everyone, if you fear hurting people's feelings, you do a disservice to whatever project you're working on, to yourself and your business," says Edelman. "That's where being too nice is not nice at all."
Advice for People Who Are Too Nice
Softies need to toughen up, says Edelman. "I'm not advocating that people become jerks or SOBs," he says, "But they need to find a balance to stay true to their nice nature while also being appropriately assertive and protecting their interests."
The challenge, then, for nice people is to redefine what it means to be nice, says Edelman, and to understand that being nice doesn't have to mean being a doormat. You can be nice and be assertive and deal with confrontation and set boundaries, he adds.
Here are three concepts nice people need to understand to succeed at work:
1. Business is competitive. Deal with it. Edelman interviewed Sam DiPiazza Jr., the CEO of PricewaterhouseCoopers, for his book. DiPiazza had this to say about business, according to Edelman: "Business, whether we like it or not, includes competition. It's challenging, aggressive and very demanding. Despite the perception of many, it can also be performed nicely."
2. Sometimes being nice isn't very nice at all. Edelman also spoke with the CEO of the American Cancer Society, John Seffrin, who believes that when managers are too nice and incapable of having honest discussions with others (such as during a performance review) for fear of hurting feelings, they're in fact not being nice at all, and they're doing a disservice to the people they manage.
3. Confrontation is not necessarily a bad thing. Nice people avoid confrontation because it's uncomfortable, says Edelman. If nice people are to be more assertive, they need to understand the business value of confrontation: it allows them to solve problems. Edelman points to a strategy employed by 1-800-GOT-JUNK CEO Brian Scudamore, which Scudamore calls "race to the conflict." The idea is, if a conflict or issue comes up, employees should race to it to get it resolved as quickly as possible. If they don't, they're wasting time.
© 2008 CXO Media Inc.

Wednesday, September 10, 2008

Microsoft joins OMG

Company pushes software modeling initiatives and plans to assist with the evolution of standards

By Paul Krill, IDG News Service


September 10, 2008

As part of its strategy for model-driven software development, Microsoft on Wednesday announced it has joined the Object Management Group (OMG).

OMG standards have included UML (Unified Modeling Language) and BPMN (Business Process Modeling Notation). Microsoft plans to take an active role in OMG working groups and contribute to an industry dialogue and assist with the evolution of standards, the company said. Microsoft is now working with the OMG finance working group on information models for insurance business functions related to the property and casualty industry.

"We think OMG is important to help contribute to the open industry dialogue. Modeling has been something that has really been viewed as sort of a niche," said Burley Kawasaki, director of product management for the Microsoft Connected Systems Division.

Microsoft, meanwhile, has been developing its own modeling initiatives, including Oslo, for model-driven software development, and its Visual Studio Rosario release.

The company has not been a supporter of UML, instead deferring to third parties to provide plug-ins offering UML support to developers. But with Rosario, the company will add support for UML 2.1.1. "For certain communities, the UML support is very important," Kawasaki said. Microsoft currently has no target release date for Rosario but previously has offered up late-2008 as an estimate.

Modeling has been viewed as a means to break down technology and role silos in application development and assist IT departments with offering more effective business strategies, Microsoft said. But modeling has failed to have a mainstream impact on how organizations develop and manage core applications, the company said.

"Many people have tried modeling many times and failed," said Kawasaki. "We think there is a much broader use of modeling that has much greater potential."

The company believes models must evolve to be more than static diagrams defining a software system. Implementing models as part of the design, deployment and management process would give organizations a deeper way to define and communicate aspects involved in the application lifecycle.

Putting model-driven development into Microsoft's .Net platform will provide organizations with visibility and control over applications, according to Microsoft.

Microsoft views model-driven technologies as a main pillar of its "Dynamic IT" vision for aligning business and IT. Other pillars include service enablement, virtualization, and the user experience.

In addition to UML backing, Microsoft plans to support BPMN in Oslo and its Visio drawing and modeling tool.

Wednesday, February 20, 2008

Internal Coding Guidelines

Table of Contents



1. Introduction

2. Style Guidelines
   2.1 Tabs & Indenting
   2.2 Bracing
   2.3 Single Line Statements
   2.4 Commenting
       2.4.1 Copyright Notice
       2.4.2 Documentation Comments
       2.4.3 Comment Style
   2.5 Spacing
   2.6 Naming
   2.7 Naming Conventions
       2.7.1 Interop Classes
   2.8 File Organization





1. Introduction



First, read the .NET Framework Design Guidelines. Almost all naming conventions, casing rules, etc., are spelled out in that document. Unlike the Design Guidelines document, you should treat this document as a set of suggested guidelines. These generally do not affect the customer view, so they are not required.



2. Style Guidelines



2.1 Tabs & Indenting



Tab characters (0x09) should not be used in code. All indentation should be done with 4 space characters.



2.2 Bracing



Open braces should always be at the beginning of the line after the statement that begins the block. Contents of the brace should be indented by 4 spaces. For example:



if (someExpression)
{
    DoSomething();
}
else
{
    DoSomethingElse();
}



“case” statements should be indented from the switch statement like this:

switch (someExpression)
{
    case 0:
        DoSomething();
        break;

    case 1:
        DoSomethingElse();
        break;

    case 2:
    {
        int n = 1;
        DoAnotherThing(n);
    }
    break;
}



Braces should never be considered optional. Even for single-statement blocks, you should always use braces. This increases code readability and maintainability.

for (int i = 0; i < 100; i++) { DoSomething(i); }



2.3 Single line statements



Single-line statements can have braces that begin and end on the same line.



public class Foo
{
    int bar;

    public int Bar
    {
        get { return bar; }
        set { bar = value; }
    }
}



It is suggested that all control structures (if, while, for, etc.) use braces, but it is not required.



2.4 Commenting



Comments should be used to describe intention, algorithmic overview, and/or logical flow. Ideally, someone other than the author could understand a function’s intended behavior and general operation from reading the comments alone. While there are no minimum comment requirements, and certainly some very small routines need no commenting at all, it is hoped that most routines will have comments reflecting the programmer’s intent and approach.



2.4.1 Copyright notice



Each file should start with a copyright notice. To avoid errors in doc comment builds, don’t use triple-slash doc comments for the notice, but using XML makes the comment easy to replace in the future. Final text will vary by product (contact legal for the exact text), but should be similar to:



//-----------------------------------------------------------------------
// <copyright file="ContainerControl.cs" company="Microsoft">
//     Copyright (c) Microsoft Corporation. All rights reserved.
// </copyright>
//-----------------------------------------------------------------------



2.4.2 Documentation Comments



All methods should use XML doc comments.
For internal dev comments, the <devdoc> tag should be used.



public class Foo
{
    /// <summary>Public stuff about the method</summary>
    /// <param name="bar">What a neat parameter!</param>
    /// <devdoc>Cool internal stuff!</devdoc>
    public void MyMethod(int bar) { … }
}



However, it is common to want to move the XML documentation to an external file; for that, use the <include> tag.



public class Foo
{
    /// <include file='doc\Foo.uex' path='docs/doc[@for="Foo.MyMethod"]/*' />
    public void MyMethod(int bar) { … }
}






2.4.3 Comment Style



The // (two slashes) style of comment tags should be used in most situations. Wherever possible, place comments above the code instead of beside it. Here are some examples:



// This is required for WebClient to work through the proxy
GlobalProxySelection.Select = new WebProxy("http://itgproxy");

// Create object to access Internet resources
WebClient myClient = new WebClient();



Comments can be placed at the end of a line when space allows:



public class SomethingUseful
{
    private int itemHash;                   // instance member
    private static bool hasDoneSomething;   // static member
}



2.5 Spacing



Spaces improve readability by decreasing code density. Here are some guidelines for the use of space characters within code:




  • Do use a single space after a comma between function arguments.
    Right: Console.In.Read(myChar, 0, 1);
    Wrong: Console.In.Read(myChar,0,1);
  • Do not use spaces between the parentheses and the function arguments.
    Right: CreateFoo(myChar, 0, 1)
    Wrong: CreateFoo( myChar, 0, 1 )
  • Do not use spaces between a function name and its parentheses.
    Right: CreateFoo()
    Wrong: CreateFoo ()
  • Do not use spaces inside brackets.
    Right: x = dataArray[index];
    Wrong: x = dataArray[ index ];
  • Do use a single space before flow-control statements.
    Right: while (x == y)
    Wrong: while(x==y)
  • Do use a single space before and after comparison operators.
    Right: if (x == y)
    Wrong: if (x==y)



2.6 Naming



Follow all .NET Framework Design Guidelines for both internal and external members. Highlights of these include:




  • Do not use Hungarian notation.
  • Do not use a prefix for member variables (_, m_, s_, etc.). If you want to distinguish between local and member variables, use “this.” in C# and “Me.” in VB.NET.
  • Do use camelCasing for member variables.
  • Do use camelCasing for parameters.
  • Do use camelCasing for local variables.
  • Do use PascalCasing for function, property, event, and class names.
  • Do prefix interface names with “I”.
  • Do not prefix enums, classes, or delegates with any letter.



The reason to extend the public rules (no Hungarian notation, no prefix for member variables, etc.) is to produce a consistent source-code appearance. In addition, a goal is to have clean, readable source; code legibility should be a primary goal.



2.7 Naming Conventions



2.7.1 Interop Classes



Classes that exist as interop wrappers (DllImport statements) should follow the naming convention below:




  • NativeMethods – no SuppressUnmanagedCodeSecurity attribute; these methods can be used anywhere because a stack walk will be performed.
  • UnsafeNativeMethods – has the SuppressUnmanagedCodeSecurity attribute. These methods are potentially dangerous, and any caller must do a full security review to ensure that the usage is safe and protected, because no stack walk will be performed.
  • SafeNativeMethods – has the SuppressUnmanagedCodeSecurity attribute. These methods are safe and can be called fairly freely; the caller isn’t required to do a full security review even though no stack walk will be performed.



class NativeMethods
{
    private NativeMethods() {}

    [DllImport("user32")]
    internal static extern void FormatHardDrive(string driveName);
}



[SuppressUnmanagedCodeSecurity]
class UnsafeNativeMethods
{
    private UnsafeNativeMethods() {}

    [DllImport("user32")]
    internal static extern void CreateFile(string fileName);
}



[SuppressUnmanagedCodeSecurity]
class SafeNativeMethods
{
    private SafeNativeMethods() {}

    [DllImport("user32")]
    internal static extern void MessageBox(string text);
}



All interop classes must be private, and all methods must be internal. In addition, a private constructor should be provided to prevent instantiation.



2.8 File Organization




  • Source files should contain only one public type, although multiple internal classes are allowed.
  • Source files should be given the name of the public class in the file.
  • Directory names should follow the namespace for the class.



For example, I would expect to find the public class “System.Windows.Forms.Control” in “System\Windows\Forms\Control.cs”…




  • Class members should be alphabetized, and grouped into sections (Fields, Constructors, Properties, Events, Methods, Private interface implementations, Nested types).
  • Using statements should be inside the namespace declaration.



namespace MyNamespace
{
    using System;

    public class MyClass : IFoo
    {
        // fields
        int foo;

        // constructors
        public MyClass() { … }

        // properties
        public int Foo { get { … } set { … } }

        // events
        public event EventHandler FooChanged { add { … } remove { … } }

        // methods
        void DoSomething() { … }
        void FindSomething() { … }

        // private interface implementations
        void IFoo.DoSomething() { DoSomething(); }

        // nested types
        class NestedType { … }
    }
}





Monday, February 11, 2008

Speeding Up Web Page Loading

Speeding Up Web Page Loading - Part I


As more and more businesses go online, just having a web presence is no longer enough to succeed. It takes a reliable, high-performance Web site that loads quickly too. After all, nothing makes an Internet user leave a site quicker than having to wait ages for a web page to load.

A previous post briefly identified the factors that determine how fast (or slow) your web pages load, namely:

* Size (of your web page)

* Connectivity (quality of your host's network connections and bandwidth)

* Number (of sites sharing your server).

This article will now discuss ways that webmasters can ensure their sites' pages load quickly and efficiently, by focusing on the first factor.

File size - the total of the file sizes of all the parts of your web page (graphics, music files, HTML, etc.) should be small enough to download quickly. A reasonably fast-loading page is around 50-70 KB, with up to 120 KB for more graphics-intensive pages. You can optimize your file size by:

1. Reducing page weight:
* Eliminate unnecessary whitespace (use tools like HTML Tidy to automatically strip leading whitespace and extra blank lines from valid HTML source) and comments
* Cut down on extras (buttons, graphics) and don't put a lot of graphics and big midi files on the same page
* Move webrings from your homepage to their own page
* Reduce the file size of some of your graphics (use GifBot, an on-line gif reducer at Net Mechanic)
* Redesign long pages to spread their content over two pages instead of one
2. Reducing the number of inline scripts, or moving them into external files - inline scripts slow down page loading since the parser must assume that an inline script can modify the page structure. You can:
* Reduce the use of document.write to output content
* Use modern W3C DOM methods to manipulate page content for modern browsers rather than older approaches based on document.write
* Use modern CSS and valid markup - CSS reduces the amount of markup as well as the need for images in terms of layout. It can also replace images which are actually only images of text. Valid markups stop browsers from having to perform "error correction" when parsing the HTML and allows free use of other tools which can pre-process your web pages.
* Minimize CSS/script files for performance while keeping unrelated CSS/scripts in separate files for maintenance
* Use External HTML Loading - involves using an IFrame for Internet Explorer and Netscape 6, and then shifting that content via innerHTML over to a <div> tag. Benefits: keeps initial load times down to a minimum and provides a way to easily manage your content. Downside: we have to load content along with all the interface elements, which can severely impair the user experience of the page. A tutorial on externally loading HTML can be found here.
3. Minimizing the number of files referenced in a web page to lower the number of HTTP connections required to download a page
4. Reducing domain lookups (since each separate domain costs time in a DNS lookup) - be careful to use only the minimum number of different domains in your pages as is possible
5. Chunking your content - the size of the full page is less important if the user can quickly start acting on some information. How?
* Replace table-based layout with divs
* Break tables into smaller ones that can be displayed without having to download the entire page's content
o Avoid nesting tables
o Keep tables short
o Avoid using tables to lay out elements on the page
o Exploit several coding techniques:
+ split the page layout into multiple independent tables to preserve the browsers' ability to render each of them step-by-step (use either vertically stacked or horizontally stacked tables)
+ use floating tables or regular HTML codes that flow around the floating objects
+ use the fixed table-layout CSS attribute
* Order page components optimally - successive transmission of the DHTML code enables the browser to render the page during loading
o download page content first (so users get the quickest apparent response for page loading) along with any CSS or script required for its display;
o disable any DHTML features that require the page to complete loading before being used, and only enable them after the page loads;
o allow the DHTML scripts to be loaded after the page contents to improve the page load's overall appearance
6. Specifying image and table sizes - browsers are able to display web pages without having to reflow content if they can immediately determine the height and/or width of your images and tables
7. Using software and image compression technology
* Use tools that can "compress" JavaScript by reformatting the source or obfuscating the source and reducing long identifiers to shorter versions
* Use mod_gzip, a compression module using the standard zlib compression library, to compress output - compressing the data being sent out from the Web server, and having the browser decompress this data on the fly reduces the amount of data sent and increases the page display speed; HTTP compression results in 150-160% performance gain (sizes of web pages can be reduced by as much as 90%, and images, up to 50%)
8. Caching previously received data/reused content - make sure that any content that can be cached is cached with appropriate expiration times since caching engines reduce page loading time and script execution by performing optimizations and various types of caching; cuts down latency by as much as 20-fold, by preventing dynamic pages from doing any repetitive work, and reducing the turnaround time for each request
9. Choosing your user agent requirements wisely - specify reasonable user agent requirements for projects; basic minimum requirements should be based upon modern browsers which support the relevant standards
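To make point 7 concrete, here is a minimal sketch using Python's standard-library gzip module (not mod_gzip itself) to show the kind of size reduction HTTP compression achieves on repetitive HTML markup; the sample markup and the resulting ratio are illustrative only:

```python
import gzip

# A repetitive HTML snippet, typical of the table-heavy markup of the era,
# which compresses extremely well.
html = ("<table><tr><td class='cell'>row</td></tr></table>\n" * 200).encode("utf-8")

compressed = gzip.compress(html)

# The browser transparently decompresses, so the page is unchanged;
# only the bytes on the wire shrink.
print(f"original: {len(html)} bytes")
print(f"gzipped:  {len(compressed)} bytes")
print(f"ratio:    {len(compressed) / len(html):.1%}")
```

Real pages compress less dramatically than this worst-case-repetitive sample, but the mechanism (compress on the server, decompress in the browser) is exactly what mod_gzip automates.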
The next post will focus on the other two factors, as well as other ways that webmasters can speed up their web page loading.

Speeding Up Web Page Loading - Part II


In Part I, we detailed how webmasters can speed up the loading of their web pages by optimizing their file sizes. Here, some additional tips to make pages load faster will be discussed.

Another factor to consider is the speed at which the pages are served. What happens is that servers get bogged down if too many web surfers ask for the same page at the same time, resulting in a slowdown in loading speed.

Although there is no way to predict exactly how many people will visit a site at once, it is always a good idea to choose a web hosting company that tunes its servers to make sure that enough computing power is given to the sites that get the most hits.

You can opt for hosts, like LyphaT Networks, that use caching and/or compression software to maximize the performance of their servers and minimize page loading times.

Another consideration is your host's connectivity or speed of Internet connection and bandwidth. Bandwidth refers to the amount of data that can be transmitted in a fixed amount of time and this actually fluctuates while you are surfing. Different users also have different access to the Internet (some might use dial-up or a dedicated T-1) so it is up to you to keep your file sizes down so that no matter who is viewing your site, they get as quick a download as possible.

Some ways you can do this is by:

* Testing your page loading time with low bandwidth emulation - you can use the mod_bandwidth module for this if you're running an Apache Web server. This module enables you to set connection bandwidth limits to better emulate typical modem speeds.
* Pinging your site (reply time should be around 100 ms or less) and then running tracert - each hop/transit point should take less than 100 ms, and if it takes longer or times out, the connection could be slow at that point.

You can check your results against the table shown at the Living Internet site on the number of seconds it takes to download data of various sizes at varying speeds of Internet connections.
* Using the HTML Toolbox program at Net Mechanic, or the Web Page Analyzer - 0.82, a free web-based analyzer that calculates page size, composition and page download time.
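The theoretical download times behind tables like the one at the Living Internet site come from simple arithmetic: page size in bits divided by connection speed in bits per second. A small sketch (the function name and chosen speeds are illustrative; real-world times are longer due to latency, protocol overhead, and shared bandwidth):

```python
def download_seconds(size_kb: float, speed_kbps: float) -> float:
    """Theoretical time to move size_kb kilobytes over a speed_kbps kilobits/s link."""
    return (size_kb * 8) / speed_kbps  # 8 bits per byte

# A 70 KB page (the upper end of the fast-loading range) at common speeds:
for label, speed in [("56k modem", 56), ("ISDN 128k", 128), ("T-1 1544k", 1544)]:
    print(f"{label:10} {download_seconds(70, speed):6.2f} s")
```

This is why keeping pages near 50-70 KB matters: at 56 kbps the same page takes roughly 10 seconds in theory, and noticeably longer in practice.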

Tracking Web Site Traffic


When you establish an online presence, you're basically after one thing, to get your message across to Internet users. You don't set up a website just so people can ignore it, do you?

Whether you are running mission-critical ecommerce sites or online marketing campaigns, as a webmaster you're naturally curious about your site's visitors.

But first, it is important to distinguish what kind of visitors go to your site. According to Yari McGauley, in his article Web Tracking & Visitor Stats Articles, websites get two kinds: normal visitors (people) and robots (or any kind of automatic 'web crawling' or 'spidering' programs), ranging from Search Engines, to Link and Website Availability Checkers to Spam/Email Harvesters.

So how can you find out more information about your visitors? There are a number of ways.

1. install a counter at your site - a counter simply provides an indication of the number of visitors to a particular page; usually counts hits (a hit is a single request from a browser to a server), which is not a reliable indicator of website traffic since many hits are generated by a single page visit (both for the request itself, and for each component of the page)
2. use logfiles - if your server is enabled to do it (check with your web host) then every action on the server is logged in logfiles (which are basically text files describing actions on the site); in their raw form, logfiles can be unmanageable and even misleading because of their huge size and the fact that they record every 'hit' or individual download; you need to analyze the data

There are 2 ways this can be done:
* Download the logfiles via FTP and then use a logfile analyzer to crunch the logfiles and produce nice easy to read charts and tables
* Use software that runs on the server that lets you look at the logfile data in real-time

Some log file analyzers are available free on the Web (ex. Analog), though commercial analyzers tend to offer more features and are more user-friendly in terms of presentation (ex. Wusage, WebTrends, Sane Solutions' NetTracker, WebTracker)
3. use a tracker - generally, each tracker will require you to insert a small block of HTML or JavaScript into each page to be tracked; gives some indication of how visitors navigate through your site: how many visitors you had (per page); when they visited; where they came from; what search engine queries they used to find your site; what factors led them to your site (links, ads etc).

Tracking tools also:
* provide activity statistics - which pages are the most popular and which the most neglected
* aggregate visitor traffic data into meaningful reports to help make website management decisions on a daily basis (ex. content updates)
4. third party analysis - services exist which offer to analyze your traffic in real time for a monthly fee; this works as follows:
* you place a small section of code on any page you want to track
* information generated whenever the page is viewed is stored by the third party's server
* the server makes the information available in real time for viewing in chart and table form

*OpenTracker is a live tracking system that falls somewhere between 3 and 4. You might notice, however, that tracking services will report lower traffic numbers than log files. Why? Because good tracking services use browser cookies as their basis, and so do not recognize the following factors as unique visits or human events:

* repeat unique visitors (after 24 hours)
* hits
* robot and spider traffic
* rotating IP numbers (i.e. AOL)

It also distinguishes how many unique visitors come from the same ISP, from behind corporate firewalls, or from large organizations; otherwise all these users would be counted as the same visitor. Log analyzers, on the other hand, record all measurable activity and do not distinguish between human and server activities.
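To illustrate what a logfile analyzer does at its simplest, here is a minimal Python sketch that counts hits per page from combined-format log lines and skips obvious robot traffic by user-agent substring. This is a simplification: real analyzers use far richer heuristics, and the sample log lines and robot hints below are invented for illustration.

```python
import re
from collections import Counter

# Minimal combined-log-format parser: IP, identities, timestamp, request,
# status, bytes, referrer, user agent.
LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "GET (\S+) [^"]*" \d+ \d+ "[^"]*" "([^"]*)"'
)
ROBOT_HINTS = ("bot", "crawler", "spider")  # crude robot detection

def hits_per_page(lines):
    hits = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # malformed or non-GET line
        path, agent = m.groups()
        if any(hint in agent.lower() for hint in ROBOT_HINTS):
            continue  # skip robot traffic, as trackers do
        hits[path] += 1
    return hits

sample = [
    '1.2.3.4 - - [11/Feb/2008:10:00:00 +0000] "GET /index.html HTTP/1.0" 200 5120 "-" "Mozilla/4.0"',
    '1.2.3.5 - - [11/Feb/2008:10:00:01 +0000] "GET /index.html HTTP/1.0" 200 5120 "-" "Googlebot/2.1"',
    '1.2.3.6 - - [11/Feb/2008:10:00:02 +0000] "GET /about.html HTTP/1.0" 200 1024 "-" "Mozilla/4.0"',
]
print(hits_per_page(sample))  # the Googlebot hit on /index.html is excluded
```

Note how the raw log records three hits but the report shows only two human page views: this is exactly the gap between log analyzers and cookie-based trackers described above.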

So why are web traffic statistics important? Because they help you fine-tune your web marketing strategy by telling you:

* Which pages are most popular, which are least used
* Who is visiting your site
* Which browsers to optimize your pages for
* Which banner ads are bringing the most visitors
* Where errors or bad links may be occurring in your pages
* Which search engines are sending you traffic
* Which keywords are used to find your site
* Which factors affect your search engine rankings and results
* Where your traffic is coming from: search engines or other web sites
* Whether your efforts to generate new customers and sales leads (such as newsletter signups and free product trials) are working or not
* Which are your most common entry pages and exit pages

Broken Link Checkers


One of the basic things that webmasters need to master is the use of links. It's what makes the Internet go round, so to speak. Links are simple enough to learn and code. But sometimes, we make mistakes and end up with broken links (particularly if we're coding manually) or even dead ones (if we don't update content that often).

To an Internet user, there's nothing more frustrating than clicking on links that give nothing but error messages (alongside those pop-up ads, of course), and as a result, they may leave your site. That's not so bad if it's just a hobby site, but what if you're running e-commerce sites? Or if you're trying to get your website registered with search engines?

I know manually checking for broken/dead links can be time consuming, not to mention migraine-inducing. So what's your recourse? Automated link checkers of course! There are a number of them available online.

Here are some (latest versions), available either for free or under GPL, for your consideration:

* LinkChecker v1.12.2 - a Python script for checking your HTML documents for broken links
* Checkbot v1.75 - written in Perl; a tool to verify links on a set of HTML pages; creates a report summarizing all links that caused some kind of warning or error
* Checklinks 1.0.1 - written in Perl; checks the validity of all HTML links on a Web site; start at one or more "seed" HTML files, and recursively test all URLs found at that site; doesn't follow URLs at other sites, but checks their existence; supports SSI (SHTML files), the latest Web standards, directory aliases, and other server options
* Dead Link Check v0.4.0 - simple HTTP link checker written in Perl; can process a link cache file to hasten multiple requests (links life is time stamp enforced); initially created as an extension to Public Bookmark Generator, but can be used by itself as is
* gURLChecker v0.6.7 - written in C; a graphical web links checker for GNU/Linux and other POSIX OS; under GPL license
* JCheckLinks v0.4b - a Java™ application which validates hyperlinks in web sites; should run on any Java 1.1.7 virtual machine; licensing terms are LGPL, with the main app class being GPL
* Linklint v2.3.5 - an Open Source Perl program that checks links on web sites; licensed under the Gnu General Public License
* LinkStatus v0.1.1 - written in C++; an Open Source tool for checking links in a web page; discontinued and forked into KLinkStatus, a more powerful application for KDE (which makes it hard for Windows and Mac users to build); KLinkStatus v0.1-b1 is at KDE-Apps.org
* Xenu's Link Sleuth™ v1.2e - a free tool that checks Web sites for broken links; displays a continuously updated list of URLs sortable by categories; Platform(s): Win 95/98/ME/NT/2000/XP
* Echelon Link Checker - a free CGI & Perl script from Echelon Design; you simply edit a few variables at the top of the script, set a url to the page you want, and it'll go to that page, get all the links, and check each link to see if its "dead" or not; allows you to set what word or words define a dead page, such as 404 or 500; Platforms: All
* Link Checker (CMD or Web v1.4) - CMD version can check approximately 170 links in about 40 seconds; CGI version takes about a minute and 10 seconds; very accurate; scans for dead links (not just 404 errors but any error that prevents the page from loading). Platform(s): All
* phplinkchecker - a modified freeware version of the old PHP Kung Foo Link Checker; reports the status (200, 404, 401, etc.) of a link and breaks the report down showing useful stats; used for finding broken links, or working links, on any page; can be easily modified for any specific use. Platform(s): Unix, Windows

You can also have your URL's links checked (for free) at the following sites:

* 2bone's LinkChecker 1.2 - allows site owners to quickly and easily check the links on their pages; allows users to add their link to 2bone's links section; added (as of Jan 2004) an option to see all results returned on a single page or use the quicker 10 links per results page
* Search Engine Optimising - via its Website Broken Links Checker Platform(s): All
* Dead-Links.com - via its Free Online Broken Link Checker from Dead-Links.com; spider-based technology and super fast online analysis

With all these resources available at no cost to you, there's really no reason why you should still have those broken and dead links around.
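Under the hood, most of these checkers do the same two things: extract the hrefs from a page, then test each one. A minimal Python sketch using only the standard library (the HTTP check is stubbed out with a fake status function so the example runs offline; a real checker would issue HEAD or GET requests and follow pages recursively):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags - step one of any link checker."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(html, status_of):
    """Return (url, status) pairs; status_of stands in for a real HTTP request."""
    parser = LinkExtractor()
    parser.feed(html)
    return [(url, status_of(url)) for url in parser.links]

# A tiny fake site: /ok.html exists, anything else returns 404.
page = '<p><a href="/ok.html">good</a> <a href="/gone.html">dead</a></p>'
fake_server = {"/ok.html": 200}.get
results = check_links(page, lambda url: fake_server(url, 404))
print(results)  # [('/ok.html', 200), ('/gone.html', 404)]
```

Swapping the stub for a real request (and treating any non-2xx status as broken, as the tools above do) turns this skeleton into a working checker.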

Caching Web Site for Speed


MarketingTerms.com defines caching as the 'storage of Web files for later re-use at a point more quickly accessed by the end user,' the main objective of which is to make efficient use of resources and speed the delivery of content to the end user.

How does it work?

Well, Guy Provost offers a more detailed explanation of How Caching Works, but simply put, a web cache, situated between the origin Web servers and the client(s), works by saving for itself a copy of each HTML page, image and file (collectively known as objects), as they are requested, and uses this copy to fulfill subsequent requests for the same object(s), instead of asking the origin server for it again.
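The mechanism can be sketched in a few lines: a toy cache that serves a stored copy while it is fresh and goes back to the origin otherwise. The class and the fetch_from_origin stand-in below are invented for illustration and ignore real HTTP concerns such as Cache-Control headers and validation:

```python
import time

class WebCache:
    """Toy model of a web cache sitting between clients and an origin server."""
    def __init__(self, fetch_from_origin, max_age_seconds=60):
        self.fetch = fetch_from_origin
        self.max_age = max_age_seconds
        self.store = {}        # url -> (object, time stored)
        self.origin_hits = 0   # how often we had to ask the origin server

    def get(self, url):
        cached = self.store.get(url)
        if cached and time.time() - cached[1] < self.max_age:
            return cached[0]   # fresh cached copy: no trip to the origin
        obj = self.fetch(url)  # miss or stale: refetch and store a copy
        self.origin_hits += 1
        self.store[url] = (obj, time.time())
        return obj

# Two requests for the same object cause only one origin fetch.
cache = WebCache(lambda url: "<html>" + url + "</html>")
cache.get("/logo.gif")
cache.get("/logo.gif")     # served from the cache
print(cache.origin_hits)   # 1
```

The max_age check is also where the staleness concern discussed below comes from: a cache that holds objects too long serves out-of-date content.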

Advantages:

* if planned well, caches can help your Web site load faster and be more responsive by reducing latency - since responses for cached requests are available immediately, and closer to the client being served, there is less time for the client to get the object and display it, which results in users visiting more often (since they appreciate a fast-loading site)
* can save load on your server - since there are fewer requests for a server to handle, it is taxed less and so reduces the cost and complexity of that datacenter (which is why web-hosting companies with large networks and multiple datacenters offer caching servers at various datacenters in their network; caching servers automatically update themselves when files are updated, which takes the load off the central server or cluster of servers)
* reduces traffic/bandwidth consumption - since each object is only gotten from the server once, there are fewer requests and responses that need to go over the network
* you don't have to pay for them

There are some concerns with its use, however:

* webmasters in particular fear losing control of their site, because a cache can 'hide' their users from them, making it difficult to see who's using the site
* could result in undercounts of page views and ad impressions (though this can be avoided by implementing various cache-busting techniques to better ensure that all performance statistics are accurately measured)
* danger of serving content that is out of date, or stale
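The staleness concern above is normally addressed by giving each cached copy a lifetime, e.g. via the HTTP `Cache-Control: max-age` directive. A hedged sketch of the freshness test a cache applies (real header parsing is more involved than shown here):

```python
# Sketch of a freshness check: a cached entry is reusable only while its age
# is within the max-age lifetime the origin server declared for it.

import time

def is_fresh(stored_at, max_age, now=None):
    """Return True while the cached copy is at most max_age seconds old."""
    now = time.time() if now is None else now
    return (now - stored_at) <= max_age

stored_at = 1000.0
assert is_fresh(stored_at, max_age=60, now=1030.0)      # 30s old: still fresh
assert not is_fresh(stored_at, max_age=60, now=1100.0)  # 100s old: stale
```

Once an entry goes stale, a well-behaved cache revalidates it with the origin server rather than serving the out-of-date copy.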

There are two kinds:

* Browser Caches
o client applications built in to most web browsers
o let you set aside a section of your computer's hard disk to store objects that you've seen, just for you, and will check to make sure that the objects are fresh, usually once a session
o settings can be found in the preferences dialog of any modern browser (like Internet Explorer or Netscape)
o useful when a client hits the 'back' button to return to a page they've already seen, and if you use the same navigation images throughout your site, they'll be served from the browser cache almost instantaneously
* Proxy Caches
o serve many users (clients) with cached objects from many servers
o good at reducing latency and traffic (because popular objects are requested only once, and served to a large number of clients)
o usually deployed by large companies or ISPs (often on their firewalls) that want to reduce the amount of Internet bandwidth that they use
o can happen at many places, including proxies (i.e. the user's ISP) and the user's local machine but often located near network gateways to reduce the bandwidth required over expensive dedicated internet connections
o many proxy caches are part of cache hierarchies, in which a cache can inquire from neighboring caches for a requested document to reduce the need to fetch the object directly
o although some proxy caches can be placed directly in front of a particular server (to reduce the number of requests that the server must handle), these are named differently (reverse cache, inverse cache, or httpd accelerator) to reflect the fact that they cache objects for many clients but from (usually) only one server

Hacking Attacks - Prevention


The first three steps are suggested by security consultant Jay Beale in his interview with Grant Gross, when asked how administrators can protect themselves from system attacks.

1. Harden your systems (also called "lock-down" or "security tightening") by

* Configuring necessary software for better security
* Deactivating unnecessary software - disable any daemons that aren't needed or seldom used, as they're the most vulnerable to attacks
* Configuring the base operating system for increased security

2. Patch all your systems - Intruders can gain root access through vulnerabilities (or "holes") in your programs, so keep track of patches and new versions of every program you use (once a security hole is found, vendors usually offer patches and fixes quickly, before anyone can exploit the hole to any large extent), and avoid using new applications or those with previously documented vulnerabilities.

3. Install a firewall on the system, or at least on the network - Firewalls are software (e.g. ZoneAlarm) and/or hardware (e.g. Symantec-Axent's Firewall/VPN 100 Appliance) that block network traffic coming into and leaving a system, permitting only user-authorized software to transmit and receive. They work at the packet level and can not only detect scan attempts but also block them.

You don't even need to spend a lot of money on this. Steve Schlesinger expounds on the merits of using open source software for a firewall in his article, Open Source Security: Better Protection at a Lower Cost.

At the very least, you should have a packet-filtering firewall as it is the quickest way to enforce security at the border to the Internet.
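The packet-filtering idea boils down to a rule table checked top-down, with the first matching rule deciding the packet's fate. The sketch below illustrates that first-match principle with a hypothetical rule set; real filters (iptables, pf, and the like) apply it with far richer match criteria.

```python
# Illustrative first-match packet filter: rules are checked in order and the
# first rule matching the packet's direction and port decides the action.

RULES = [
    # (direction, port, action) - port None matches any port
    ("in", 80,   "allow"),   # web traffic to our server
    ("in", 22,   "allow"),   # SSH for administrators
    ("in", None, "deny"),    # default: drop all other inbound packets
]

def filter_packet(direction, port):
    for rule_dir, rule_port, action in RULES:
        if rule_dir == direction and rule_port in (None, port):
            return action
    return "allow"           # no rule matched (e.g. outbound traffic)
```

With this rule set, inbound web and SSH packets pass, an inbound NetBIOS packet on port 139 is dropped by the default rule, and outbound traffic is unrestricted.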

EPLS offers the following suggestions/services for Stopping Unauthorized Access, using firewalls:

* Tighten the Routers at your border to the Internet in terms of packets that can be admitted or let out.
* Deploy Strong Packet Filtering Firewalls in your network (either by bridge- or routing mode)
* Setup Proxy Servers for services you allow through your packet-filtering firewalls (can be client- or server-side/reverse proxy servers)
* Develop Special Custom Made Server or Internet services client and server software

4. Assess your network security and degree of exposure to the Internet. You can do this by following the suggestions made by EPLS.

* portscan your own network from outside to see the exposed services (TCP/IP services that shouldn't be exposed, such as FTP)
* run a vulnerability scanner against your servers (commercial and free scanners are available)
* monitor your network traffic (external and internal to your border firewalls)
* refer to your system log - it will reveal (unauthorized) services running on the system, and hacking attempts based on format string overflows usually leave traces here
* check your firewall logs - border firewalls log all packets dropped or rejected and persistent attempts should be visible

If these checks reveal Portmapper, NetBIOS (ports 137-139), or other dangerous services exposed to the Internet, take action immediately.

Also, more complex security checks will show whether your system is exposed through uncontrolled Internet Control Message Protocol (ICMP) packets or if it can be controlled as part of DDoS slaves through ICMP.

5. When choosing passwords, don't use

* real words or combinations thereof
* significant numbers (e.g. birthdates)
* the same or similar passwords for all your accounts
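The password rules above can be expressed as a simple checker. This is only a sketch: the word list and birthdate pattern here are tiny illustrative stand-ins for the large dictionaries real password auditors use.

```python
# Sketch of a weak-password check covering the rules above: dictionary words,
# birthdate-like numbers, and (as a baseline) very short passwords.

import re

COMMON_WORDS = {"password", "letmein", "dragon", "monkey"}  # illustrative only

def weak(password):
    p = password.lower()
    if p in COMMON_WORDS:                  # real words
        return True
    if re.fullmatch(r"(19|20)\d{6}", p):   # birthdate-like, e.g. 19851224
        return True
    if len(p) < 8:                         # too short to resist brute force
        return True
    return False

assert weak("password")        # dictionary word
assert weak("19851224")        # birthdate
assert not weak("k7#Qw9!zRp")  # long, mixed, non-dictionary
```

Reusing even a strong password across accounts defeats it, which is why the third rule above is a policy matter rather than something a checker can test.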

6. Use encrypted connections - encryption between client and server requires that both ends support the encryption method

* avoid Telnet, POP, or FTP programs unless the passwords they send over the Internet are strongly encrypted; encrypt remote shell sessions (like Telnet) if you switch to other user IDs or the root ID
* use SSH (instead of Telnet or FTP)
* never send sensitive information over email

7. Do not install software from little-known sites - as these programs can hide "trojans"; if you have to download a program, verify it before installation against a published checksum or signature (typically an MD5 digest or PGP signature)
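Checksum verification as described in step 7 amounts to hashing the downloaded file and comparing the digest with the one the vendor publishes. A sketch (MD5 is shown because the article mentions it; modern practice prefers SHA-256 or a PGP signature):

```python
# Compute the MD5 digest of a downloaded file in chunks (so large files
# don't need to fit in memory) and compare it with the published digest.

import hashlib

def md5_of(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, published_digest):
    return md5_of(path) == published_digest
```

If `verify` returns False, the download was corrupted or tampered with and should not be installed.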

8. Limit access to your server(s) - limit other users to certain areas of the filesystem or what applications they can run

9. Stop using systems that have already been compromised by hackers - reformat the hard disk(s) and re-install the operating system

10. Use Anti-Virus Software (ex. Norton Anti-Virus or McAfee) and keep your virus definitions up-to-date. Also, scan your system regularly for viruses.

Esther M. Bauer discusses some of the ways Web hosting providers' security officers face these challenges. These include:

* looking at new products/hacks
* regularly reviewing policies/procedures
* constant monitoring of well known ports, like port 80, that are opened in firewalls
* timely installation of patches
* customized setup of servers that isolate customers from each other - "In a hosting environment the biggest threat comes from inside - the customers themselves try to break into the system or into other customers' files"
* investment in firewall, VPN devices, and other security measures, including encrypted Secure Sockets Layer (SSL) communication in the server management and account management systems
* installation of secure certificates on web sites
* purchase and deployment of products according to identified needs
* monitoring suspicious traffic patterns and based on the customer's service plan, either shunting away such traffic as bad, or handling it through a content-distribution system that spreads across the network