Most code search engines index only identifiers, comments and similar words, and hence fail to return impressive results. Stanford researchers show that indexing subtrees instead of keywords leads to much more effective search.
ASTs of production-grade software systems end up being very large. The researchers claim that the average number of AST nodes, even in simple university programming assignments, is close to 150! While the size is scary, the structure fits the purpose, and AST construction tools are readily available, which is an added advantage.
Grammar drives programming language syntax. Parsers leverage the grammar to detect errors, extract parts of the source code, do type checking, and so on. Since typical programming language grammars are context-free, their derivations are inherently trees. Thus, abstract syntax trees (ASTs) came into existence.
While the original purpose of the AST was not much beyond parsing and its direct applications, researchers have since investigated how semantic analyses can be built over ASTs. Interesting ideas evolved, such as replacing subtrees for program repair, detecting duplicate subtrees for clone detection, and semantics-preserving AST transformations for synthesizing source code.
Code search has lagged behind in leveraging the richness of AST information. The gory details of source code can now be abstracted away and meaningful subtrees can be indexed. I hope these ideas make it to production-quality, web-scale search engines soon.
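To make the subtree-indexing idea concrete, here is a minimal sketch using Python's standard `ast` module. It is not the researchers' actual system; the function names and the "shape" serialization are my own illustrative choices. The idea is to key each subtree by its node-type structure, so structurally similar code maps to the same index entry regardless of identifier names.

```python
import ast

def subtree_shape(node: ast.AST) -> str:
    """Serialize a subtree as a nested string of node type names,
    ignoring identifiers and literals (only structure matters)."""
    children = [subtree_shape(c) for c in ast.iter_child_nodes(node)]
    return f"{type(node).__name__}({','.join(children)})"

def index_subtrees(source: str) -> dict:
    """Map each subtree shape in the source to its occurrence count."""
    index = {}
    for node in ast.walk(ast.parse(source)):
        shape = subtree_shape(node)
        index[shape] = index.get(shape, 0) + 1
    return index

snippet = "total = 0\nfor x in items:\n    total += x\n"
index = index_subtrees(snippet)
print(len(index), "distinct subtree shapes indexed")
```

Even this tiny snippet produces well over a dozen AST nodes, which is consistent with the observation above that real programs quickly reach large node counts.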
Here are some questions to ask yourself if you want to do good research.
Key points to ponder:
- What have you done?
- What is the big theme of your research?
- How do the small pieces connect for a bigger thesis?
- Have you done one piece of work that is cool? Why is it cool, or what is cool about it?
- How impactful is your research? Where can we apply the ideas?
- Do you have a plan? Do you know when you will be done with your research?
- What skills did you accumulate as part of your research efforts?
- Do not use too much jargon while explaining.
- How do you know you have done a good job? Experimental evaluation.
- How does your work fit into the big picture?
- Who has done what and where is the gap?
- How do you organize the literature?
- What is the scope of your research?
- Why is your area of research so cool?
- Have sufficient backup slides. I spent a lot of time on my backups.
- Keep to the flow. You may have to cut a lot of interesting material. Sometimes our mind does not allow us to do it: it may be cool to show, I have done it and I want to talk about it. But if it just does not fit the flow, the right thing to do is to chop it off and move it to the backup slides.
- Sometimes mentioning “this is very interesting; yet, considering time, I will skim over it…” might buy you more time to talk in detail 🙂
- Show a lot of energy while talking.
- Golden rule: above all, show confidence and joy while talking about your research. The rest will take care of itself 🙂
These are in no particular order, and they are not exhaustive.
இன்னா செய்தாரை ஒறுத்தல் அவர் நாண
நன்னயம் செய்து விடல்
English translation: Do not get into tit for tat. When someone does something bad to you, do something good to them in return.
There are many such great verses in the Thirukkural. The Thirukkural consists of 133 chapters, each consisting of 10 such couplets.
Some notes on student success:
- Someone’s got to tell you money is not extremely important, at least for now.
- Success repeats.
- There is one quality in all achievers, “Gratitude”!
- Success is not bought. It is built, little by little.
- Phased repetition is more important than one time slog.
- Focus. Have a direction and stick to it.
- The first 15 hours or so of study gives you an idea; it does not make you an expert.
- After the first 100 hours of preparation, you will realize that you know nothing. Keep your patience and confidence, and stick to your direction.
- Practice makes you perfect. There is this 10 year rule. Listen to Angela Lee Duckworth.
- Mind works at a different speed. Writing perhaps slows you down enough to give your mind the time to think. So, write down important things. Revise.
- When you can no longer think, stop reading and take a break. Aptitude, not memory, is what you are training.
- Most key events such as exams happen in the morning hours. So, keep your body cycle such that you are at your best during these hours.
- Find pleasure in the process of achievement, and not in the achievement itself. That way, you will have many hours of satisfaction instead of just a few moments.
Most experiments are designed on a controlled corpus, i.e., one whose precision and recall are already known, either manually or through some means other than the experimental tool itself. Such corpora are thus smaller samples of the real corpus, over which an oracle can be implemented to compute recall. Sampling works in most cases, but it has its limitations: a sample can pose a serious threat to validity, since the results could differ with another sample, and creating several large samples under several circumstances may be infeasible. Let us review some techniques researchers follow in these contexts to compute recall.
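The oracle idea above is simple in code: once the gold set of relevant items for a sampled corpus is known, recall is just the fraction of gold items the tool actually retrieved. Here is a hedged sketch with made-up bug identifiers; the function name and data are illustrative only.

```python
def precision_recall(retrieved: set, gold: set) -> tuple:
    """Precision and recall of a retrieved set against a known gold set."""
    true_positives = len(retrieved & gold)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# The oracle: relevant items known in advance for the sampled corpus.
gold = {"Bug-101", "Bug-205", "Bug-317", "Bug-412"}
# What the tool under evaluation actually returned.
retrieved = {"Bug-101", "Bug-317", "Bug-999"}

p, r = precision_recall(retrieved, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.50
```

Note that the recall denominator is the gold set, which is exactly why recall cannot be computed without such an oracle on an uncontrolled corpus.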
Gold Sets or Benchmarks
Using an existing benchmark: One way to address this issue is to use a carefully selected, representative dataset such as SF100 (http://www.evosuite.org/experimental-data/sf100/). While this is large and unbiased, it may still be too large for certain recall computation tasks. Such benchmarks are also referred to as “gold sets”. Moreover, such benchmarks are so rare and specialized that they may not suit your purpose all the time.
Creating your own benchmark: In Shepherd et al.’s paper “Using Natural Language Program Analysis to Locate and Understand Action-Oriented Concerns”, the authors hire a new person to prepare the gold set along with the relevant results. Another person verifies the results, and the two discuss and reconcile wherever there are disagreements. The gold set is then released to the community.
Comparative Evaluation instead of Recall
In papers such as “Improving Bug Localization using Structured Information Retrieval”, a comparative result is given instead of recall: the authors claim that their approach finds x% more bugs than another tool.
If only one relevant result is expected per query, computing MRR (Mean Reciprocal Rank) is more appropriate than recall, and it is easy to compute over the top-10 or top-k results.
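MRR is straightforward to compute: for each query, take the reciprocal of the rank of the first relevant result in the top-k list (0 if it does not appear), then average across queries. A minimal sketch with made-up result lists:

```python
def mean_reciprocal_rank(ranked_lists, relevant) -> float:
    """Average 1/rank of the first relevant hit per query (0 if absent)."""
    total = 0.0
    for results, rel in zip(ranked_lists, relevant):
        for rank, item in enumerate(results, start=1):
            if item == rel:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(ranked_lists)

queries = [["a", "b", "c"], ["x", "y", "z"], ["p", "q", "r"]]
expected = ["b", "x", "missing"]          # one relevant answer per query
print(mean_reciprocal_rank(queries, expected))  # (1/2 + 1 + 0) / 3 = 0.5
```

This is why MRR suits the "exactly one expected result" setting: the metric rewards placing that single answer near the top, without needing a full relevance judgment for every returned item.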
More on this …soon.
Four simple rules to keep in mind while naming your thesis are:
- Avoid redundancy.
- The title can be broader than your work but never narrower.
- A title that would be worthy of a survey paper is a good sign.
- Complete, catchy and crisp.
Following is one approach to arrive at a title:
- List the connecting ideas that determine your work. Usually there are three to four. For instance, I:
- Improve code search.
- Leverage naturalness of source code.
- Use natural language descriptions around source code.
- See if any of these are too narrow. If yes, make them broader. For instance,
- “Natural language descriptions” are a highly specialized form of “documentation”; documentation can be in any format. So, let us make it “Use documentation”.
- Look at survey titles in your area of research to find naming styles. I went to Google Scholar and tried the query “TSE code search survey”. Here are some examples that I liked:
- Feature location in source code: A taxonomy and survey.
- A survey of software reuse libraries
- Exemplar: A source code search engine for finding highly relevant applications
- Comparing two methods of sending out questionnaires; E-mail versus mail
- Tracelet-based code search in executables
- … and so on
- Now, the third one looks like an extension of a single conference paper idea. So, I drop it. For the rest, I abstract and note down the styles as follows:
- X in Y: A taxonomy and survey.
- A survey of X.
- Comparing two methods of X; x1 versus x2.
- X-based Y in Z.
- X’ing Y-based applications via automated combination of Z techniques.
- Learning from X to improve Y.
- Comparison and evaluation of X tools and techniques: A qualitative approach.
- X based recommendation for Y.
- Effective X based on Y model.
- Exploring the X patterns of Y in Z.
- … and so on.
- Ok! There are a lot. So, let us find which of these abstractions suit us. Clearly, I do no comparative evaluation, so that style won’t suit me. I have to combine the key ideas of “software engineering applications”, “modeling source code”, “using documentation”, “leveraging naturalness” and “code search”. So, let us narrow down and look for such patterns:
- Leveraging documentation and exploiting the naturalness of source code in improving code search. (too long)
- Enhanced retrieval of source code by leveraging big code and big data. (too heavy – big code, big data, retrieval)
- Enhancing code search by automatically mining related documentation. (not bad but too simple).
- Improving code search using relevant documentation (much better than the previous one but still simple).
- Exploiting retrieval models for analysis of source code. (sounds good)
- Models of source code to support retrieval based applications.
- Leveraging naturalness and relevant documentation in source code representations.
- Source code representations for search. (too short – misses key points)
- Improving code search using retrieval models.
- Adapting text retrieval models for analysis of source code: Benefits and Challenges.
- Note that the above step makes me think about what exactly I am doing.
- There is an implied priority in the order of phrases. For example, in “Models of source code to support retrieval based applications”, the emphasis is more on modeling source code; naturally, the survey is expected to cover state-of-the-art code models, which fits my work. In “Adapting text retrieval models for analysis of source code”, it sounds like I am going to cover text retrieval models in depth, and perhaps no source code models. Actually, I do both to some extent!
- Let us now pick a few and think deeper. To aid our work, let’s group our ideas as perspectives.
- Perspectives on modeling source code
- Models of source code to support retrieval based applications.
- Source code representations for search.
- IR perspective
- Improving code search using retrieval models.
- Enhancing code search by automatically mining related documentation.
- Building retrieval based applications by leveraging naturalness in source code.
- Naturalness perspective
- Leveraging statistical properties of source code in improving code search.
- Leveraging statistical properties of source code in retrieval (based applications).
- Leveraging statistical properties of source code for effective code search.
- Leveraging naturalness of source code in building retrieval based applications.
- Intelligence perspective
- Knowledge discovery from Big Code and relevant documentation.
- Leveraging large scale source code repositories for building search-based applications.
- Ok! So, what should I do now? The best way forward is to discuss this with a few people around me and decide which title I would be most comfortable with.
PhD students often have several questions about conducting research, job opportunities after the PhD, and so on. Having talked to several students, professors and researchers, here is a compilation of the wisdom obtained along these lines. There is no single right answer and there are always exceptions, so take these with caution. Also, most of these apply to computer science, big data, data science and ML backgrounds.
- Positioning: Typically, the inverted-triangle approach is followed to find research gaps and select an area to focus on. As an example, here’s how a colleague of mine shaped his work during his PhD: Image Analysis –> Biometrics –> Fingerprint Recognition –> Latent Fingerprint Analysis. Note that there may be many ways to draw a hierarchy that reaches Latent Fingerprint Analysis. There is no rule or single right way to select one of them. However, having clarity on this hierarchy is important for a few reasons:
- After the PhD, how would you sell yourself? As a Latent Fingerprint Analysis expert? That is too narrow to find job opportunities. How about Fingerprint Recognition expertise? Still too narrow; our country may not have sufficient job opportunities. Much broader levels may work, but it is still hard, and at much broader levels, how good are we? So, as much as we gain depth in our research field, solid breadth is also required. Moral: be a domain expert and an area expert, not just a problem expert.
- Finding the right problems to solve: time is too short to focus on everything.
- Dependence on Advisor: Be independent. It is your PhD. PhD is all about training you to be an independent researcher.
- PhD Training: PhD is all about training yourself for independent research. Doing high quality research requires skills in terms of:
- Area survey.
- Finding the right problem.
- Literature review.
- Problem definition.
- Solution approach.
- … all sections of the paper.
- Timing of the job application: At least 6 months go into the application process if you are applying to academia. Keep an eye on the requirements: xx conference papers, yy journal papers and zz technical reports are important for UGC norms.
- Does brand value matter? Unfortunately, yes. Internships and post-docs at good places are probably important for this reason. Credibility of your profile is very important: good publications, a reputed post-doc, competitive skills, etc. will help you.
- Why should I do internship?
- Brand value to resume.
- Learn different styles of writing and working, and experience different environments.
- Exposure to real world.
- Adapting to newer problems and people.
- Make contacts.
- What skill sets are you building? Develop skill sets during PhD period. In this case,
- Technical: Feature analysis, Image Segmentation, Noise removal, data enhancement, deep learning libraries, ML, general problem solving, etc.
- Managerial: Worked with other students on BTP, IP, individually, etc.
- Teaching: TA awards, etc.
- Coding: Java, Hadoop, R, etc.
- Financial: Acquiring funding – Writing research proposals.
- Networking: In the domain of work, build contacts.
- PhD in Computer Science: Implies that you can solve problems in computer science. You are not a PhD in Latent Fingerprint Analysis. Keep this in mind. Think CS, Do CS. Keep learning CS.
- Making tangible contributions: Create products, tools, proof of concepts. Publish papers. Pass competitive exams.
Where should I spend my time?
- Improve skills you are already good at, or build new skills? Prioritize. Have a clear map based on the direction you want to take in the future; this needs clarity of vision.
- Keep honing your skills.
- Manage breadth and depth in parallel. Do not get bogged down too much in depth alone.
- Presentations are just a tool to communicate your ideas. Do not overspend your time on preparing ppts. Work on your skills and thinking process.
- Your research topic is “blah blah”. What else have you done apart from this? What skills do you bring? Show a flavor of breadth you bring in. Can you code?
- Analyzing a real problem: typically, a project the interviewer is part of is presented in the interview as a toy problem, and you are tested on how you would approach it. In a way, this tests your ability to think from scratch.
- There are too many things to learn and too little time. Clear thinking, good breadth, analytical skills, presence of mind, and communication skills can help you here.
- Do not over defend your work. Every work has its limitations.
More on this… soon.