I exported the comments from the WordPress version of this blog as best I could. Perhaps sometime later I will link back to the original posts.

Author Comment In Response To Submitted On
Jorge Aranda In reply to Neil Ernst. As a likely quack, I find this fine line comforting Columbus’s Heilmeyer Catechism 2016/07/20 at 12:55 am
Neil Ernst In reply to Jorge Aranda. Quack … or visionary? That’s a fine line. Columbus’s Heilmeyer Catechism 2016/07/19 at 8:34 pm
Jorge Aranda Columbus did get his proposal “reviewed,” by Portuguese, Genovese, Venetian, and Spanish experts, and it was turned down every time, on the grounds that his math was pretty wrong and the Earth was bigger than he wished. They knew (as most literate people then) that the Earth was round, and they knew roughly its size—Columbus must’ve seemed to them a bit of a quack. He got funded pretty much on a whim by the Spanish crown, and they just happened to get lucky: https://en.wikipedia.org/wiki/Christopher_Columbus#Quest_for_financial_support_for_a_voyage Columbus’s Heilmeyer Catechism 2016/07/19 at 6:34 pm
Adi Prasetyo Hello thanks 13 Great Software Architecture Papers 2016/04/11 at 10:19 am
Stefan Wagner I think the first one is solved in principle. There are so many archiving options out there now that also give you a DOI. I specifically like Zenodo, which is able to archive specific versions from GitHub. The second one is really a problem. I guess we should have an opt-in mechanism in GitHub and similar platforms for participating in such studies. On Using Open Data in Software Engineering 2016/03/08 at 5:26 am
Neil Ernst In reply to Jorge Aranda. Yeah, the point here is to add the right tests, so that a test uncovering (important) bugs should have been closer to the first one added. (you write buggy code?) The Marginal Utility of Testing/Refactoring/Thinking 2016/01/21 at 3:12 pm
Jorge Aranda There are diminishing returns on extra tests for sure—but still, sometimes I find that a test I previously thought almost pointless uncovers bugs, and that just reinforces my need to test as heavily as I can. The Marginal Utility of Testing/Refactoring/Thinking 2016/01/21 at 2:41 pm
Neil Ernst In reply to fabianodalpiaz. I guess it falls under ML tools … but you’re right. I really like Garm’s work though. Requirements, Agile, and Finding Errors 2015/12/22 at 12:56 pm
fabianodalpiaz Nice post! It is curious that you don’t mention NLP explicitly. In the PhD work of Garm Lucassen (e.g., http://www.staff.science.uu.nl/~dalpi001/papers/luca-dalp-werf-brin-15-re.pdf), we use simple NLP techniques to improve the quality of user stories (… we don’t impose new notations). Dan Berry’s dumb tools paper (The Case for Dumb Requirements Engineering Tools, REFSQ’12) inspired our work: a useful automated tool for RE is one that achieves (close to?) 100% recall; the cost may be to sacrifice precision, but at least the analyst doesn’t have to recheck all the requirements. Our tool is deployed as a service and integrates with Jira/Pivotal. Integration with SonarQube as you suggest is definitely interesting! Requirements, Agile, and Finding Errors 2015/12/08 at 5:39 am
Jorge Aranda In reply to Neil Ernst. That’s true. But in your example I would say that this is what web frameworks are giving us—they are painful to set up, but on the whole worth it and an improvement over the status quo of a few years ago. However going from that to systems “generated automatically by algorithms based on well specified requirements and test cases” is a big qualitative jump, and that’s the one that makes me very sceptical. How Writing Code is Like Making Steel 2015/11/12 at 1:41 pm
Neil Ernst In reply to Jorge Aranda. I think we’re still in agreement. I just think the level of abstraction where the discernment is needed will be higher. It’s a familiar argument: we almost never hand-code assembly anymore. Why should we hand-code fairly repetitive Javascript to make a page do something a thousand others do? How Writing Code is Like Making Steel 2015/10/29 at 8:56 am
Jorge Aranda This is the odd instance where we disagree completely. I don’t discount search-based software development, but I don’t see how it can lead to the qualitative jumps in discernment that are routinely needed in software work. Funnily, when I read the title of your post, I was expecting the steel making analogy to go elsewhere: something about hardening and tempering based on the interaction of the system and the rest of the world, and I was fully prepared to agree with that analogy. How Writing Code is Like Making Steel 2015/10/29 at 1:06 am
juli1 Wow – super interesting point of view. I think this is definitely an optimistic point of view. Two to three years ago, I would have made fun of you. But now, considering the changes I saw over the last few years, I am less skeptical. This future is exciting and scary at the same time; not really sure how to consider it. How Writing Code is Like Making Steel 2015/10/27 at 12:21 pm
Neil Ernst In reply to C. Albert. Thanks! My colleague insists this is a google/jquery bug. Thoughts from a CodeFest 2015/02/24 at 11:15 pm
C. Albert I love the map you made! I can only see the top left corner. But for only 24 hours that is some pretty good work. Thoughts from a CodeFest 2015/02/24 at 11:13 pm
Ben Burton (@bjburton) We did the same challenge! https://grub-up.herokuapp.com/ Thoughts from a CodeFest 2015/02/24 at 4:19 pm
Steve Easterbrook Is the Comic Sans really that much of a giveaway? Surely I’m not the only one who uses it to annoy the font nazis? The Gap Between User Requirements and Software Capabilities as Technical Debt 2015/01/20 at 11:00 pm
juli1 There are some guidelines for using a library in the “Hacker’s Guide to Python” (very good book by the way). The general idea is: if the framework does a ton of work for you and is well supported by a BigCo, go for it! If you are just using a function in a library, implement it yourself and avoid the headaches of compatibility and maintenance. Example: wanna make a dynamic mobile app? Use jQuery – this is supported by Microsoft, Google, etc. … the big players of the web! So you have a good probability it will be maintained AND supported. Example 2: want to write Excel documents in Java? Do not take jexcelapi (http://jexcelapi.sourceforge.net/), this is done by a single guy and the last version is 4 years old! Take Apache POI, which is active and still under development (and supported by a foundation). My 2 cents: use common sense to architect your software and weigh the pros/cons of relying on somebody else. Frameworks, libraries, and dependencies 2015/01/20 at 8:18 am
Neil Ernst In reply to fede_luppi. It’s been a while, so things may have changed, but if header levels don’t work you might want to try a bulk search and replace in the resulting Latex file (e.g., s/chapter/section/ and s/section/subsection/). Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2014/09/24 at 7:45 am
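For anyone repeating this today, the bulk replace Neil suggests can be scripted; a minimal sketch with sed (thesis.tex is a hypothetical filename), demoting the deepest level first so nothing gets demoted twice:

```shell
# Demote LaTeX heading levels in the generated .tex file, deepest first:
# \subsection -> \subsubsection, \section -> \subsection, \chapter -> \section.
sed -i.bak \
  -e 's/\\subsection{/\\subsubsection{/g' \
  -e 's/\\section{/\\subsection{/g' \
  -e 's/\\chapter{/\\section{/g' \
  thesis.tex
```

The .bak suffix keeps a backup of the original, which is worth having since this edits the generated file in place.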
fede_luppi When I compile following this workflow, my resulting pdf is structured as chapters. I want a research paper, so I do not want my first levels to be named as chapter, but rather just with numbers (I, II, III,…). I tried with different base header levels in the meta-data, with no result. What else can I do? Thanks Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2014/09/17 at 1:05 pm
mircealungu The text is interrupted after: “Partly is this because…”. Is this an example of CMS failure? Software research shouldn’t be about the tools 2014/06/13 at 10:38 am
Ale I found out here http://tex.stackexchange.com/questions/91522/how-can-i-get-my-latex-set-up-on-emacs-auctex-to-use-the-file-line-error-option that it is possible to modify directly the command inside emacs and it works! Forcing AucTex to properly show error messages 2013/06/07 at 7:45 am
irwinhkwan This is a very good list and one that aligns primarily with my experience as well, though in my case I didn’t continue with my Ph.D work extensively and instead extended and picked up much of the work being done in the new group that I was working with. I think this depends highly on the group you are joining. Some Advice on Doing a PostDoc in Software Engineering 2013/06/03 at 5:06 pm
Julius Davies Informative and reasonable! Some Advice on Doing a PostDoc in Software Engineering 2013/05/23 at 4:30 pm
Neil I think it is probably a challenge on every project, agile or not – certainly that is what we hear from various clients we have at SEI. Prioritization is poorly understood – some people in the Lean world call it waste, but at some point you have to decide what you are going to implement, and that to me is prioritization. Maya Daneva does interesting work on prioritization. The fuzzy notion of “business value” 2013/03/14 at 5:40 am
Eric Knauss I especially like 5: In my opinion, a motivated team, being passionate about what they are doing, ranges among the top success factors. Do you have any information why (2-5) should be easier to make visible in non-agile projects? Just curious. For example, on the one hand I assume that (3) could be difficult, because you would not implement for future technical challenges (YAGNI). On the other hand, plan for change enables organizations to move quickly, when a new opportunity arises. Any thoughts? The fuzzy notion of “business value” 2013/03/13 at 8:06 pm
Chris Parnin (@chrisparnin) Next semester, the first homework assignment is to add a new feature to last year’s projects. Teaching Advanced Software Engineering 2013/01/25 at 3:19 pm
Ebioman In reply to Ebioman. Nevermind found it somewhere else: https://github.com/neilernst/misc/blob/master/abbrvnat-nourl.bst More helpful LaTeX tips 2013/01/09 at 2:03 am
Ebioman A shame that the file is gone – was really looking for something similar … More helpful LaTeX tips 2013/01/09 at 2:01 am
Arber Borici (@ArberBorix) Thanks for pointing it out. This is particularly true for two other classes of problems PhD researchers encounter: undecidable and hard. I’m often puzzled to hear of PhD students trying to come up with solutions to problems which the Halting problem is reducible to. Also, instead of spending a few hours thinking whether another problem is hard or not, they immediately jump into finding a solution. Which turns out to be frustrating after a few months of failed attempts :)… A stitch in time… 2013/01/06 at 2:33 pm
Asfahaan Mirza Wow, this looks pretty complicated. Can you please create a screencast with these steps? I am using Scrivener and Mendeley. Struggling to make it work. Some notes on integrating Mendeley, Scrivener, MultiMarkdown and (Xe)Latex 2012/09/19 at 6:24 am
Neil In reply to JLawrence. The best way is probably to use the comment notation to pass the image code through as plain Latex (i.e., uninterpreted by the Markdown compiler). To be honest, once the bulk of the text is written and converted, I often found it simpler to use Latexian or Emacs etc. to do these finicky edits. Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2012/08/14 at 9:57 am
JLawrence One thing that is not clear for me is how to insert figures and get them properly numbered/captioned (incl. after several revisions). I tried this workflow once, but it was so hard to understand, especially when I had to use many packages like mchem, etc. Can a master here provide a step-by-step guide to improve Scrivener-Latex workflow ? My Scrivener 2 has been biting the dust since I bought Pages and Latexian… Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2012/08/13 at 7:43 pm
Neil In reply to Dan Griffin. Thanks for visiting, Dan. I have tried out JazzHub, and it is a good option. I still think, though, that RTC is just too much tool for students to get used to. These course projects often only last maybe 8-10 weeks. Using GitHub for 3rd Year Software Engineering 2012/08/13 at 2:20 pm
Dan Griffin Hi Neil. I just ran across your blog. Thanks for your comments about RTC. I’m on the marketing team for IBM Rational and work quite a bit with universities. Are you aware of our JazzHub offering (hub.jazz.net)? JazzHub removes the requirement for a locally installed server and allows the students to quickly set up projects in the cloud. I’d love to talk to you about it and show you a demo. Also, you will be happy to know that one of the new features we added in RTC 4.0 is Windows shell integration. This would allow your students to use whatever IDE they want — with no RTC integration required. Then they could use Windows Explorer to sync their files, much like you describe with Git above. Here is a blog discussing that feature: https://jazz.net/blog/index.php/2012/02/06/introducing-the-rational-team-concert-shell-integration-for-windows-explorer/ I’d also be very curious to hear more of the comments you get on your course feedback — as those are exactly the types of things we need to know. –Dan Using GitHub for 3rd Year Software Engineering 2012/08/13 at 1:05 pm
Jorge Aranda I’ve been hearing about Sassen’s keynote a lot, but I hadn’t found anyone who could explain it to me, or even what it was roughly about. So thanks. It sounds like it was actually quite interesting, though it’s also obvious from other people’s comments that Sassen did not make it easy for this community to understand her. ICSE 2012 Thoughts (1): Saskia Sassen Keynote 2012/06/25 at 3:45 pm
Neil In reply to Adrian. This is the Cisco client from the UBC site. I tried the Mac native client but no luck on my end. Using iCloud and Cisco VPN 2012/06/21 at 4:23 pm
Kambria She made reference to a phenomenon she called “barefoot engineers”, people who, post-Communism, set up rudimentary technologies like utilities outside of the traditional structures. … In health care we call these people “positive deviants”. I really like this approach to solving problems: http://www.positivedeviance.org/ ICSE 2012 Thoughts (1): Saskia Sassen Keynote 2012/06/21 at 2:35 pm
Adrian Are you using the Cisco client or the built-in VPN client of OSX? Using iCloud and Cisco VPN 2012/06/20 at 4:16 pm
John Hunter When metrics are used to aid learning they can be beneficial. When they are used to set goals tied to bonuses or indirectly tied to money via performance appraisals… they mess things up. The focus turns to meeting the number, not doing the job. http://management.curiouscatblog.net/2004/08/29/dangers-of-forgetting-proxy-nature-of-data/ Even when metrics are used for learning they can lead to all sorts of trouble when there is not an understanding of variation. Thankfully software developers are more likely to have a basic idea of variation than MBAs, but it is still a big problem. http://management.curiouscatblog.net/2006/05/09/understanding-data/ I very much like “Let us stipulate that there are endless examples of low-maturity teams out there whom no technique will help.” DeMarco and “Cannot control what you cannot measure” 2012/04/28 at 8:56 pm
Neil In reply to planetpolly. No, no data to hand, although the standard approach (Google web search comparison) shows 245M results for Git and 3M for “IBM RTC” (yeah, 7 years of PhD and this is the best I can do). I feel the same as you re: OSS. In general, my hunch is that employers would like someone who at least understands version control principles. Specific tool practices can be taught on the job, I think. Using GitHub for 3rd Year Software Engineering 2012/04/26 at 2:35 pm
planetpolly Our software team transitioned about a year ago from SVN to Git / Github. Thinking about it from a teaching perspective, I would way rather have a separate version control tool, outside the IDE, to start with – it makes it clear what the tool is doing, and then when a student is later shown IDE integration, they ‘get’ that it is making their lives easier, but they better understand the underlying mechanism. Otherwise it’s easy to memorize the ‘steps’ to get code checked in without really understanding the underlying version control model… Do you have any info on the industry adoption of these tools? Teaching students how to use git and the underlying models seems like a valuable skill. In general, I support teaching students using open-source tools and leaving the big expensive tools for when / if they get that kind of firepower in a workplace. Using GitHub for 3rd Year Software Engineering 2012/04/26 at 12:50 pm
Kambria Isn’t the difficult thing for humans the unknown? Can you look at the evolution of code and see what percentage changes or is added/deleted in coming releases, to identify a baseline of expectations? It would help if people understood and could factor in how much the anticipated cost will be, knowing that there will be changes that could be huge. Requirements tools and tasks 2012/03/28 at 2:27 pm
Neil In reply to Weldon Bonnell. I passed them through with comment tags. But if you use the Markdown reference syntax, I think it will translate into the correct Latex: see http://daringfireball.net/projects/markdown/syntax#link. Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2012/03/15 at 12:17 pm
Weldon Bonnell Thanks for writing about your experience. Still confused about the use of the \ref command. Is there some syntax for MMD3 to generate the \ref in LaTeX automatically, or after all this are you still passing them through with the <!-- --> comments? Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2012/03/15 at 12:06 pm
Tim Brandes Thanks for the article. Thanks jan gerben for the MMD3 explanation. Based on this, I’ve written an article covering the topic, too: http://timbrandes.com/blog/2012/02/28/howto-write-your-thesis-in-latex-using-scrivener-2-multimarkdown-3-and-bibdesk/ Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2012/02/28 at 2:51 pm
Neil In reply to Jorge Aranda. Thanks Jorge. I agree with your critique. I suspect at least one reason is simply information overload. I find it difficult enough to stay on top of my own field; trying to determine how other disciplines might be of use seems overwhelming. Doesn’t it come back to the fact that true inter/trans-disciplinarity is incredibly difficult to organize and succeed at? Case studies and grounded theory in software engineering 2012/02/23 at 8:40 pm
Jorge Aranda This is a very nice post, Neil. Incidentally, one of the problems I have with grounded theory is its disregard for current and valid theory. You’re supposed to start with a blank slate, and let a theory arise from your data. But for most of the questions we’re interested in, there are already plenty of theories (sociological, psychological, organizational) that, with slight modifications, should be applicable in our field. Why ignore them? Case studies and grounded theory in software engineering 2012/02/23 at 8:02 pm
Marius Hofert Okay, it works – I had to use “file_line_error_style=t”, i.e., with the ending _style and with underscores. Forcing AucTex to properly show error messages 2012/02/17 at 11:55 am
Marius Hofert This problem is also mentioned here: http://stackoverflow.com/questions/7885853/emacs-latexmk-function-throws-me-into-an-empty-buffer I followed this advice and also tried your solution but it did not work for me (with Gnu Emacs 24 on Mac OS X 10.7.3 and with AUCTeX 11.86). Forcing AucTex to properly show error messages 2012/02/17 at 11:03 am
Christine In reply to Sam. HAHAHA! This will resonate with me. The “deathtrap” is the perfect term for that situation. And you’re right, it’s scary as hell without brakes! Pointless: Bike lanes downtown 2012/02/10 at 8:06 am
gerben jan Hi, for installation of MMD3 in Scrivener you should add that one needs to install the MultiMarkDown-Support package! This changes (on Mac) ~/Library/Application Support/Multimarkdown/bin. Just figured this out so thought I should mention it. Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2012/01/19 at 10:22 am
Jorge Aranda I’m glad you wrote to the President, Neil. The comments in his blog post are worth reading, too, and quite an embarrassment for the ACM’s position. The Research Works Act 2012/01/16 at 5:10 pm
Neil In reply to Stephan Lewandowsky. Exactly. Not ideal but it works. I don’t know about Windows, but I wouldn’t be surprised, since there isn’t anything Windows specific involved. Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/11/16 at 4:10 pm
Stephan Lewandowsky righto, thanks, that clarifies it. so i presume the \ref commands are likewise just embedded in the scrivener text so you can refer to figures and so on? final question: do you know if mmd3.2 for windows integrates equally seamlessly with scrivener? Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/11/16 at 3:52 pm
Neil In reply to Stephan Lewandowsky. Yes, you type MMD syntax into Scrivener. E.g., [#ernst11re;]. Then in the Scrivener “compile” menu choose MMD->Latex. You can make tables with MMD, but since a lot of my thesis was already in Latex, I would surround the Latex syntax for the figures and tables with HTML comments, which are then translated verbatim during the compile process. The compile command generates a .tex file, which you then must compile using pdflatex or what have you. I believe you can skip this and go straight from MMD to PDF, but I need more customization. Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/11/16 at 9:17 am
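To make Neil's description concrete, a Scrivener text in this workflow might look something like the following sketch (the citation key, figure file, and label are made-up examples):

```markdown
Requirements evolve over a system's lifetime [#ernst11re;].

<!--
\begin{figure}
  \centering
  \includegraphics{evolution.pdf}
  \caption{An example figure}
  \label{fig:evolution}
\end{figure}
-->
```

On compile, the commented LaTeX should appear verbatim in the generated .tex file and the citation marker should become a LaTeX citation command; that file then still has to go through pdflatex (or similar) to produce the PDF.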
Paul Gvildys In reply to Neil. The code is similar enough for the most part (I’m on enough maintenance release code reviews that this holds true for the most part). The difficult thing is not just the switching, it’s the “being pulled away from mainline dev” and so having a hard time getting the mainline dev stuff done. There’s also the case where some maintenance is so old that it runs on different platforms, resulting in having several virtualized OSes lying about. My new gig at UBC 2011/11/16 at 7:58 am
Stephan Lewandowsky Very nice post, very helpful and succinct. What I cannot figure out is what you actually type into scrivener to get the commands into MMD and ultimately LaTex. That is, the [#Jarke..] and [^foot1] commands in your mmd file; are they typed into Scrivener exactly like that? And how do you insert figures and tables with references to it. I noted that your sample .tex file contained a \figure with a \label: How does this start out in scrivener? I am also unclear on what the final output of the scrivener compile command is: Is it a LaTeX file which you then need to process via standard LaTeX compilation or is it a complete PDF (which means scrivener somehow pulls in miktex or whatever to do the job). Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/11/16 at 2:55 am
Neil Do you find it difficult to context-switch from maintenance to mainline? Or is the code similar enough? My new gig at UBC 2011/11/15 at 10:58 pm
Paul Gvildys Although I am more on the practical side than the research side, I am interested in the last two topics you presented. The maintenance one mainly because I work on mainline development, but often assist the maintenance team on maintenance branches, and if they are all on vacation, sometimes get pulled in to fix maintenance branches. The last one I find interesting as that’s exactly how I program: I theorize and build everything in my head – software takes a physical form in my head with mechanical parts and everything – and then I go about implementing what I built in my head. It’s also how I debug. My new gig at UBC 2011/11/15 at 6:06 pm
Neil In reply to David. Isn’t unpredictability somewhat axiomatic in defining ULS? It seems like Agile approaches, although hardly tested at scale, have it right when they simply accept change as inevitable. Ultra-large-scale systems: fundamentally different? 2011/11/15 at 12:11 pm
David The key challenge for LSCITS and ULS style initiatives is to develop a science and engineering of software systems that enables the prediction, identification, management and troubleshooting of emergent behaviour. The fundamental problem is that current software engineering techniques do not tend to cope well with emergence. In other words, we are not as good as industry would like us to be at predicting and coping with non-linear interactions. Example problem 1: ‘Company A’ scales up a distributed system from 10,000 to 100,000 nodes and it behaves in ways that their state-of-the-art models and simulations did not predict. Example problem 2: When ‘Company B’ deployed a system at their Manchester office the users benefited from its functionality and it worked as expected. However, when ‘Company B’ deployed the same system at several other sites it was resisted, and workers claimed that it did not fit the way their sites worked despite them having the same business processes as at Manchester. Ultra-large-scale systems: fundamentally different? 2011/11/15 at 11:37 am
Jorge Aranda What, no knowledge of piñatas after all those years sitting next to me? (BTW for a moment I was thinking that part of the sh*t that 2006-2008 Neils went through was our seating arrangement, but then I realized we moved into the lab precisely after that period! So: after sitting next to me, sh*t transformed into gold. You’re welcome!)But seriously, I’m happy for you and for the next stage in your life. Congratulations! What I learned at UofT 2011/10/25 at 12:08 am
Neil In reply to Fabiano. Someone should put a measurable theory in place then, and we can test it to see whether what you postulate is true. So far ULSS seems like a bunch of hand-waving and unchallenged assumptions. This is all too prevalent in academic research. Ultra-large-scale systems: fundamentally different? 2011/10/11 at 8:59 am
Fabiano In reply to Neil. The challenge is not in size. I guess the fundamental difference is in real agent-orientation. What that means is that every agent is a locus of control, you don’t have a (hierarchical) centralized control, and what actually matters is interaction among these “agents”. Therefore, traditional algorithms/design methodologies prove conceptually inadequate. Ultra-large-scale systems: fundamentally different? 2011/10/11 at 4:53 am
Neil In reply to iandol. Todonotes works well, but to be honest tools like Word’s track changes are much more useful. But then you’d have to use Word. Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/10/06 at 12:40 pm
iandol How does scrivener > MMD > tex handle comments? Does that link in to todonotes in your preamble? Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/10/06 at 11:38 am
Neil In reply to Jorge Aranda. I don’t agree that this increasing complexity is anything fundamentally new. Arguably, someone from the 80s would be incredibly surprised at the complexity of software today – e.g., a fly-by-wire system in an airplane, even a portable phone that can direct me to the nearest Starbucks. I’m a software optimist: I think by and large software has dealt with some enormously challenging systems very capably. Ultra-large-scale systems: fundamentally different? 2011/09/30 at 5:52 pm
Jorge Aranda I think the issue is not so much that these systems are “ultra” large (whatever “ultra” means), but that they are increasingly complex, and systems change in their behaviour and understandability with complexity. So—the argument goes—it’s not that we’re dealing with instructions that are orders of magnitude more numerous, but that they are supposed to serve a function in sociotechnical systems that are orders of magnitude more complex. Ultra-large-scale systems: fundamentally different? 2011/09/30 at 4:36 pm
Jeremy Gibbs In reply to Jeremy Gibbs. Nevermind, I have it figured out. It turns out the LaTeX compile option in Scrivener has a Meta-Data option. It was populated with a blank author and title, so it was printing them out regardless of my Meta-Data file. Thanks again. This cements my dissertation workflow. Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/09/18 at 12:38 pm
Jeremy Gibbs Thanks for the succinct post. I have everything working okay, except that when I compile the Scrivener paper, the resulting .tex file has a couple of extra things on top that I can’t seem to get to go away. They are empty mytitle and myauthor tags, regardless of whether I have defined them in the Meta-Data file. Any ideas? \def\mytitle{} \def\myauthor{} \input{generalexam-header} \chapter{Test} Hello, testing this etc. $\psi \frac{3}{2}$~\citep{AndreasEmanuel2001} \input{generalexam-footer} \end{document} Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/09/18 at 10:38 am
frerin Somehow this is not working for me, likely I do not understand what to put there and how to define all the stuff. Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/09/17 at 6:16 am
Neil In reply to Carlos. You’ll need to stick your .tex files somewhere the latex parser can find them, obviously. I don’t totally understand the tex tree on Mac, but I have my header and footer in ~/Library/texmf/tex/latex/mmd/, alongside the MMD3 samples. Then the class file is in ~/Library/texmf/tex/latex/ (ut-thesis.cls). If you post the latex that was generated on a gist site, I can give more feedback. Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/08/03 at 7:55 pm
Carlos Hi, nice post, but I’m at a loss. I’ve installed the two packages. Then I created a new document in Scrivener and added a new text with the meta-data. And then…? Can you explain, or suggest a website to help me understand, how I can define a new latex-class in MMD3 – let’s say one of the IEEE templates? Thank you. Writing Complex Latex Documents with Scrivener 2.1 and MultiMarkDown 3 2011/08/03 at 6:54 pm
julio leite Congrats. Nice post. IT failure statistics 2011/04/21 at 8:09 am
Neil In reply to Patrik Björklund. I’m not as familiar with the business side of things, but there is good research from Scandinavia on business requirements, software development, and maintenance. For example, Bente Anda’s paper on four companies building the same system. In general studying these things is challenging because of the complexity. Controlling for all the variables is essentially impossible. So most of the research is case study format, and its poorer cousin, anecdote. I too find it strange though that so much empirical software research was done in the 80s and then died out. Perhaps the lesson is that the research didn’t generalize very well to other settings (e.g., the Basili studies on NASA are a pretty unique environment). IT failure statistics 2011/01/29 at 11:15 pm
Patrik Björklund Hi Neil! It’s an interesting subject you bring up, which made me wonder if there are any recent academic studies on the current situation of IT project failure? The sources I see referred to all the time are kind of old. Thanks. (I actually got here from the mendeley/scrivener post, which was also pretty useful) IT failure statistics 2011/01/29 at 1:48 pm
Neil In reply to microbe. My supervisor makes comments on the PDF I create, and I then fix things up in the Scrivener file. That way I can save a “snapshot” of my work prior to making his changes. So the Scrivener version is always canonical. I would say to pick one version and make it the main one, otherwise you’re right, it’s a nightmare. If your supervisor makes changes in Latex (lucky!) I’d just use version control (e.g., Git) and stick with Latex. For safety I have my latex export in a Dropbox folder, and save Scrivener backups (zip files) to the same folder. Some notes on integrating Mendeley, Scrivener, MultiMarkdown and (Xe)Latex 2011/01/28 at 10:40 am
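The snapshot-before-revision habit Neil describes maps naturally onto Git itself; a hypothetical sketch (the directory, tag, and file names are made up):

```shell
# Keep the exported LaTeX under Git, tag the state that was sent to the
# supervisor, and diff against that tag after applying the feedback.
cd ~/thesis-latex
git add -A
git commit -m "Draft sent to supervisor"
git tag sent-to-supervisor        # snapshot to diff against later
# ... apply the supervisor's edits, re-export from Scrivener, then:
git add -A
git commit -m "Apply supervisor feedback"
git diff sent-to-supervisor -- thesis.tex   # everything changed since the snapshot
```

This keeps the Scrivener project canonical while still giving a recoverable record of the version the supervisor actually saw.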
microbe I’m wondering how you use those tools during the revision process. I use Mendeley to manage my references and LaTeX for writing. I was trying Scrivener and it was great for drafting, and smooth to move to LaTeX for further processing. Then my draft goes back and forth between my supervisor and me. In this step I use mainly LaTeX, but I worry a little about the draft in Scrivener because it becomes outdated relative to the current version. Do you somehow sync back, or just leave the draft as a draft? Some notes on integrating Mendeley, Scrivener, MultiMarkdown and (Xe)Latex 2011/01/28 at 10:16 am
Evan Cofsky Isn’t it 37signals? yes, sorry, fixed it. The industrial fallacy in software research 2011/01/25 at 11:27 pm
Lean Education Software was developed for dedicated purposes for dedicated machines until the concept of object-oriented programming began to become popular, making repeatable solutions possible for the software industry. Dedicated systems could be adapted to other uses thanks to component-based software engineering. Companies quickly understood the relative ease of use that software programming had over hardware circuitry. Management and software projects 2011/01/19 at 1:39 am
Neil In reply to Keith. Markdown doesn’t use typefaces but rather HTML tags. So the only options available are <strong> and <em>. I suspect something in the Kindle conversion is translating <em> into underlines. I’d look for the intermediate files – the XSLT for the Kindle conversion. Some notes on integrating Mendeley, Scrivener, MultiMarkdown and (Xe)Latex 2010/11/11 at 3:17 pm
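Neil’s point (Markdown emphasis only ever becomes `<em>` or `<strong>`, so any underlining must be introduced later in the conversion pipeline) can be illustrated with a toy converter; this is a sketch for illustration, not MultiMarkdown’s actual code:

```python
import re

def emphasis_to_html(text):
    # Markdown has no notion of typefaces: emphasis can only compile to
    # the semantic tags <strong> (**bold**) and <em> (*italic*).
    text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"\*(.+?)\*", r"<em>\1</em>", text)
    return text

print(emphasis_to_html("a *slanted* and **heavy** word"))
# a <em>slanted</em> and <strong>heavy</strong> word
```

Whether `<em>` then renders as italics or as underline is up to the styling stage; a Kindle conversion stylesheet that maps `em` to underline would produce exactly Keith’s symptom.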
Keith Do you know of any way to keep italics in my text from being converted to underlines when I compile the document? I’m using OSX and converting the document to a Kindle format. Every time I compile, my italicized text gets underlined. It’s driving me a little crazy. Some notes on integrating Mendeley, Scrivener, MultiMarkdown and (Xe)Latex 2010/11/11 at 3:00 pm
Muhammad AbuBakar Hello friends, I hope all are fine. Please can someone share: while listening to an audio book, can we mark/make several bookmarks? E.g. a few in chapter one, a few in chapter four, etc. Please reply. Keep smiling ITunes bookmarks 2010/10/20 at 5:41 am
Alberto Bacchelli Thanks Neil, this is a very interesting post! IT failure statistics 2010/08/11 at 3:06 am
Neil I forgot to mention that REFSQ2011 will have a revised timeline. The conference will be in Essen in March 2011, and the abstract deadline is October 8th, 2010. REFSQ summary 2010/07/23 at 10:20 am
Marcel vdL Nice summary of the conference, and I liked being there (and meeting up). I’m not too sure I understood the plenary statements as you phrased them here, however. But let’s not get into the debate again right here… Being ‘from the industry’, I agree with your observation that the quality of the end-product should be the focus, and that academia must be aware of this. In all fairness, I think the general trend is moving in that direction, which I was glad to see. A little less “research for research’s sake”, please. The conference was a good way to get ‘industry’ and ‘academia’ together to talk and exchange experiences. I hope to see you again next year! REFSQ summary 2010/07/05 at 3:49 am
Neil In reply to Jorge Aranda. Well one way is to tie your theory into the broader perspective … i.e. “Small companies dispense with RUP because it hurts quality” etc. REFSQ summary 2010/07/03 at 3:40 am
Jorge Aranda Nice. It sounds like it was quite fun and thought-provoking; I’m sad I missed it.“We seem to focus so much on “making RE better” that we lose sight of the ultimate goal, which is to make better (software) products.”Yes, I agree. But isn’t this the bane of all specializations? Is there really a way out of it? REFSQ summary 2010/07/02 at 9:18 pm
Neilfink08 In reply to Alecia. Thanks Alecia! What does it mean to have a baby? 2010/03/20 at 9:43 am
Neilfink08 In reply to Jorge. Piñata! Haven’t done that since I was a kid. Daytum is pretty simple, but it was enough for what I wanted. You could get better visualizations from excel entry (I think you had a similar thing from Seattle?). What does it mean to have a baby? 2010/03/20 at 9:43 am
Jorge You missed the breaking of the piñata at the lab! What is this daytum thing? Is it just data entry & visualization, or does it have some way to track time that I’m missing? What does it mean to have a baby? 2010/03/20 at 9:03 am
Alecia Awww that’s great Neil! It’s a little late, but congratulations nonetheless What does it mean to have a baby? 2010/03/19 at 11:50 pm
Neilfink08.wordpress.comx In reply to Bill Conniff. The files I had did not have CR/LF characters (which are just whitespace elements in XML). And the question is not “will VIM show it” but rather “how long do you want to wait”. I certainly wouldn’t want to edit that size file using XML mode. MSR Challenge: large files revisited 2010/03/01 at 9:59 am
Bill Conniff What if a bad character occurs in a file with no carriage returns or line feeds (common in XML messaging)? Will vim show a 3GB file in a tree? MSR Challenge: large files revisited 2010/02/28 at 4:17 pm
Neilfink08.wordpress.comx In reply to Steve Easterbrook. But then we get into the challenge of educating scientists about parallelization and optimization and discretization. Topics even experienced programmers don’t understand very well. Science was easier when we just had slide rules and log tables. Open science and workflows 2010/02/02 at 12:06 pm
Vítor Souza Hey Neil, Good points. I’ve recently bought an Android phone and I’m enjoying it. But you gotta have a reason to make this kind of upgrade, otherwise it’s just wasting money. Cheers from Trento! Vítor iPhone? Am iMissing something? 2010/02/02 at 5:11 am
Steve Easterbrook I think one of the biggest challenges is to get design choices about parallelization and algorithm optimization up there in the “language of science” representation, so that these are no longer an afterthought. Open science and workflows 2010/02/01 at 12:46 pm
Mr. Gunn Neil, I like how you’ve got your collection feed going into Friendfeed. I’ve got one set up that way, too. A “just bookmarked” feed should be coming soon, too. Thoughts on open notebooks for software scientists 2010/01/29 at 12:35 am
Neilfink08.wordpress.comx In reply to rogerthesurf. Now, what I want is, facts. Teach these boys and girls nothing but Facts. Facts alone are wanted in life. Plant nothing else, and root out everything else. You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them. This is the principle on which I bring up my own children, and this is the principle on which I bring up these children. Stick to Facts, sir!— Dickens Understanding climate change with anecdote 2010/01/28 at 4:48 pm
rogerthesurf Right on Neil, Are you afraid that your faith that Global Warming is caused by CO2 might be threatened? Return to facts for once in your life! If you can find logical and well-referenced facts to refute what is on my blog, I am all ears, and I always allow ALL comments there as well. Cheers Roger Understanding climate change with anecdote 2010/01/28 at 4:45 pm
Neilfink08.wordpress.comx In reply to Jorge. Somehow I haven’t got around to reading his blog… Understanding climate change with anecdote 2010/01/28 at 3:46 pm
Jorge Looks like you got your own climate denial troll, Neil, I’m envious! Understanding climate change with anecdote 2010/01/28 at 3:44 pm
rogerthesurf Neil, I trust you read my blog then? I agree wholeheartedly that our planet needs attention, but what I am saying is that chasing after the life-giving gas CO2 as the culprit will simply divert resources away from the real problems that need attention, such as heavy metal pollution, genuinely poisonous gases, chemical waste, etc. AND if the IPCC have their way, meeting the carbon emission targets and making the transfers to third world countries they propose will simply break the world economy. In the pipeline for my blog is a page that will show how the proposed targets, if implemented, will cause general economic collapse and likely starvation for you, me and our children. All this for an unproven, discredited hypothesis? Please watch my blog Cheers Roger http://rogerfromnewzealand.wordpress.com Understanding climate change with anecdote 2010/01/20 at 6:12 pm
Neilfink08.wordpress.comx In reply to rogerthesurf. Roger, you can absolutely do something about it. Start with driving your car less, eating more vegetarian foods, reducing home power consumption, composting, etc. No matter what your beliefs on the global conspiracy, I fail to see how these actions will harm you – and they may even do some good. Good luck! Understanding climate change with anecdote 2010/01/20 at 10:49 am
rogerthesurf There might be global warming or cooling, but the important issue is whether we, as a human race, can do anything about it. There are a host of porkies and not very much truth barraging us every day, so it’s difficult to know what to believe. I think I have simplified the issue in an entertaining way on my blog, which includes some issues connected with climategate and “embarrassing” evidence. In the pipeline is an analysis of the economic effects of the proposed emission reductions. Watch this space, or should I say blog: http://www.rogerfromnewzealand.wordpress.com Please feel welcome to visit and leave a comment. Cheers Roger PS The term “porky” is listed in the Australian Dictionary of Slang. (So I’m told.) Understanding climate change with anecdote 2010/01/20 at 5:20 am
Cameron Neylon I think it’s a really interesting question what the threshold is for different purposes. I mean, there is no reason not to record everything, because it is “easy” and comprehensive. But when you’re presenting that to some specific person or system for a specific purpose, you will want to summarize it in some way. The choices you make seem to me to depend on what the purpose of your communication is and what/who the target is. Trivial example: if you want to show a software developer a problem with their system, you want a different kind of summary than the cleaned-up and streamlined version that you might submit with a paper. But there are lots of subtleties here. What do you think about the ideas I suggested about capturing the relationships between the objects you created? Does that work in your context, or is there too much command-line work between the creation of the relevant objects? A better scientific notebook 2010/01/16 at 6:17 am
Anonymous I think you have to believe in intellectual equality in order to buy David’s argument. This is also making a lot of assumptions about rationality. Given that 60% of the adult population has problems with formal reasoning, I’m not sure I can care that much about that long tail, if 60% could be bunk. Look at politics; look at the US. In the southern US you had 80% of white males agree on something. Is this the long tail? Collective intelligence has reared its head in markets, especially mortgages and real estate. It didn’t work out either. I remain highly skeptical of this version of “long tail” intelligence. The Long Tail and expertise 2010/01/15 at 11:05 am
Neilfink08.wordpress.comx In reply to Anonymous. Yes, those ideas are good – I use TiddlyWiki and Git for my work. However, re-reading these notes or recreating workflows from commits done months in the past is not quite what I was getting at. What you really need to do is document why certain constants are used, why you are excluding things below a certain threshold, and so on. Interestingly I think these small but sometimes very important decisions are very hard to pick up in peer review. A better scientific notebook 2010/01/11 at 8:03 pm
Anonymous I recommend a tool that can tag and timestamp notes. Sometimes these lab notes help quite a bit. I also recommend using version control for everything you’re doing and carefully documenting each commit. This commit log is your research log. The tools are there; all that is needed is the willpower on your part. A better scientific notebook 2010/01/11 at 5:24 pm
Anonymous Here’s how you solve a problem on wikipedia or the Internet at large, watch and learn: http://www.google.ca/search?hl=en&source=hp&q=perl+sucks&btnG=Google+Search&meta=&aq=&oq=perl+sucks 552,000 for perl sucks http://www.google.ca/search?hl=en&safe=off&q=C+sucks&btnG=Search&meta=&aq=f&oq= 28,100,000 for C sucks http://www.google.ca/search?hl=en&safe=off&q=Python+sucks&btnG=Search&meta=&aq=&oq=Python+suck 1,100,000 for Python sucks Obviously Python sucks 100% more than Perl sucks. It must be due to Python’s lack of lexical scope (note how a conclusion is made with a complete lack of evidence). While C sucks 28X more than Python. Data and science in enterprise computing 2010/01/06 at 7:44 pm
Neil In reply to Greg Wilson. Googling “Perl sucks” led me to this: http://rs79.vrx.net/opinions/computers/languages/PerlC/. Not one to engage on substantive issues, are you? Data and science in enterprise computing 2010/01/06 at 2:18 pm
Greg Wilson You say, “here is a trend in the software blogosphere to use one or two data points as solid evidence that “C sucks” or “Perl is unreadable””, but don’t provide a citation. Data and science in enterprise computing 2010/01/06 at 2:10 pm
Rui Curado You may want to keep an eye on ABSE (http://www.abse.info). ABSE is a code-generation and model-driven software development methodology that is completely agnostic in terms of platform and language, so you wouldn’t have any trouble applying CBSE or any other approach you would like. The big plus is that you can generate code exactly the way you want. The downside is that you may have more work to do at first to build your templates. But this is a common scenario in all model-based approaches. After all, it’s “model-driven”, so you’ll have to build the models! ABSE allows you to capture your domain knowledge into “Atoms”, which are basically fragments of larger models you can build. ABSE is both declarative and executable. The model is able to generate code by your specification and incorporate custom code at the model level. Unfortunately, ABSE is still a work in progress and an Integrated Development Environment (named AtomWeaver) is still in the making. Anyway, a CTP release of the generator is scheduled for Jan-Feb 2010, so we’re already close to it. My experience with model-driven development 2010/01/01 at 7:22 am
anonymous Investigate the Common Lisp loop macro to see how hard it is to do simple things. Now scale up to MDA My experience with model-driven development 2009/12/30 at 9:59 pm
Frank You saved me!!! ITunes bookmarks 2009/12/26 at 6:13 pm
martin It is a shame that common sense like “certain things are useful in certain contexts” doesn’t get twittered, redditted, blogged, digged, slashdotted I study software, not software engineering 2009/12/26 at 3:23 am
Prabhakar Karve I came to this blog by chance, but found a lot of sense in what you are saying. While building and maintaining software, we need to take care of two diametrically opposite systems, namely the production system and the innovation system. The production system needs to be predictable, repeatable, measurable, deterministic, hierarchical and low risk. On the other hand, the innovation systems are uncertain, exploratory, judgemental, ambiguous, cross-functional and high risk. By whatever name we call it, software engineering MUST help us balance these two systems in the most appropriate way for a given situation. I study software, not software engineering 2009/12/14 at 3:10 am
Wyatt @Manuel: Web design is not the same as developing for the Web. I’m not sure what the point of conflating the two is. I study software, not software engineering 2009/11/30 at 2:20 pm
Neil The math is a little beyond the time I have, but … this seems to address the space issue — won’t a generator do the same thing? — but I don’t see how it handles the time issue. For N items I still need to do 2^N operations, don’t I? I really need to filter the subsets of a given solution, so I never need to operate on them. Thanks for the links. My sample Google/Microsoft interview question 2009/11/14 at 4:25 pm
Anonymous Knuth’s Art of Computer Programming, Volume 4, on combinatorics, permutations and combinations, would give you the answer. As long as you are allowed to check each subset once and in order, you can do this within the storage limit of a single job. How? Iterate through all combinations, one at a time, and then operate on each of those subsets. So as long as all you need to do is iterate and test, you’re good; you don’t need crazy memory. Donald E. Knuth, The Art of Computer Programming, Volume 4, Fascicle 2: Generating All Tuples and Permutations. Addison Wesley Professional, 2005. ISBN 0201853930. Donald E. Knuth, The Art of Computer Programming, Volume 4, Fascicle 3: Generating All Combinations and Partitions. Addison Wesley Professional, 2005. ISBN 0201853949. Michael Orlov, Efficient Generation of Set Partitions, http://www.informatik.uni-ulm.de/ni/Lehre/WS03/DMM/Software/partitions.pdf. Also, if you like Perl: http://search.cpan.org/~fxn/Algorithm-Combinatorics/Combinatorics.pm My sample Google/Microsoft interview question 2009/11/14 at 4:04 pm
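The constant-memory iteration Anonymous describes can be sketched in a few lines of Python; `f` here is a stand-in predicate for illustration, not the real requirements check from the post:

```python
from itertools import chain, combinations

def subsets(items):
    """Yield every subset of items one at a time, smallest first.
    Only the current combination is materialized, so memory stays O(n)
    even though there are 2^n subsets in total."""
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))

# Stand-in for the expensive check f(); any predicate works here.
f = lambda s: sum(s) <= 3
ok = [s for s in subsets([1, 2, 3]) if f(s)]
# ok == [(), (1,), (2,), (3,), (1, 2)]
```

This addresses space, not time: the loop still visits all 2^n subsets, which is the objection Neil raises in his reply about needing to skip subsets of a known solution entirely.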
Neil In reply to Jon P. S is a set in P; I should fix that. The trouble is that f(1) = True and f(2) = True does not imply f([1,2]) = True. What I’m dealing with are requirements, so P is a powerset of requirements, and requirements might interact, e.g., requirement 1 makes it impossible to achieve requirement 2. We want to maximize the set of requirements we can achieve (where f() tells us if we can achieve that set). So we just remove subsets of it without knowing or caring what their individual results are. Thanks for the comment! My sample Google/Microsoft interview question 2009/11/14 at 1:23 pm
Jon P Hey Neil, I probably don’t understand the problem exactly, because the following solution seems trivial. First off, I assume that f takes an element of P (the powerset)… that is, f takes a set. In your post you say it takes a subset, but that would mean f takes a set of sets, no? I assume not. Anyhow, if f returns true on set S, then you say that we must remove all of the elements in the power set of S from P. Presumably f also returns true for all the elements of the powerset of S if it returns true for S. If so, then it returns true for each of the original set items (the ones of which P is the powerset) that appear in any S for which f(S) returns true. So solution: just run f on every element in the original set of items and filter out the powerset of the set of items for which f returns true. E.g. if f returns true for f(1), f(3), and f(5), subtract powerset([1,3,5]) from P and you’re done. What am I missing? My sample Google/Microsoft interview question 2009/11/14 at 12:37 pm
Steve While this reduces the hiss, it also increases the resistance in the headphones, so you will lose some high-frequency detail in the output; people make expensive low-impedance headphones for good reason. Another solution is to use an external (USB) sound output like the Griffin iMic. Apple should have grounded their sound card and isolated the output better; hopefully the next gen of MBP will be better designed in this respect. Quick Tip: Macbook hissing in headphones 2009/11/10 at 3:20 pm
Neil In reply to Jorge. Yes, I’ll take a look at them soon. It does seem like the best way to plan is to conduct a meta-analysis of all the estimates to derive a best guess. De-referencing climate claims 2009/08/26 at 8:19 am
Jorge Excellent exercise, Neil, thanks for posting. I wonder if the figures in the other studies are much smaller due to economic considerations – for instance, dismissing deep offshore entirely. De-referencing climate claims 2009/08/26 at 7:57 am
anon_anon You may want to use VTD-XML (http://vtd-xml.sf.net) it has an extended version that supports documents up to 256 GB MSR: parsing large XML files 2009/08/21 at 9:40 pm
Mike Malone Dude. Freaking genius. I’ve been trying to find a solution to this problem for a while now, and this worked perfectly. Wish Apple would fix their shit, but this is a great hack. Thanks! Quick Tip: Macbook hissing in headphones 2009/08/13 at 1:47 pm
Neil Aran, I guess my question boils down to whether one needs Big Design Up-Front (and if so, when). My bias is to say that it is almost never necessary. Are we missing something? 2009/07/07 at 1:07 pm
karthik’s Software Guide Well, the information is good. Is it applicable in a real-time application? Some thoughts on lean software development 2009/07/07 at 3:33 am
Aran Donohue I don’t think it is a fair comparison. Your examples of large enterprisey things are designed in a certain way for good reasons. J2EE is big because it needs to satisfy every use case. They ask, “What technologies do we need to solve all these problems?” The agile world comes from the opposite direction. They ask, “What problems can we solve with these elegant, simple technologies?” Then they develop new simple technologies that fit nicely on that curve. As for processes, I think that canned processes have limited utility when applied to large projects. Agile needs you to have a “customer” who runs acceptance tests. A large project has dozens of different customer types. All the iterative processes fundamentally rely on short iterations. This becomes nonsensical when individual “features” are way larger than any reasonable iteration speed. As for a PhD to establish the benefits of the agile world, I think a better question would be to ask, “How do all these (smart) people NOT using these modern fancy tools manage to succeed nonetheless?” Are we missing something? 2009/07/06 at 9:01 pm
Neil In reply to Aran Donohue. Personally, no. The blogs I linked to mention some examples: Microsoft, Sprint, various corporate IT shops. IBM has an ’embrace and extend’ lean initiative: http://www.ibm.com/developerworks/blogs/page/ambler?entry=lean_development_governance Some thoughts on lean software development 2009/07/06 at 2:38 pm
Aran Donohue Nice summary. Know anyone who uses lean for software? Some thoughts on lean software development 2009/07/06 at 2:29 pm
Neil A follow-up: Tom DeMarco seems to agree that the notion of ‘engineering’ is outdated, in a recent article in IEEE Software. He thinks we should be focusing more on delivering value, where we’ve seen some amazing success, rather than precision and meeting deliverables. I study software, not software engineering 2009/07/02 at 11:38 am
Jorge I don’t think this can be established without a detailed analysis of the context of the organization. It is easy to argue that either agile or sturdy (as I prefer to call it) approaches are better for some situation or other; I don’t think we even need more empirical evidence of this by now. The question becomes what should my organization do, given its particular context, and this is still a very open research question. Also note: we shouldn’t confuse the sturdy, large enterprise projects that often use J2EE, SOA, and so on, with the stodgy version of waterfall that is still widespread in academia, but nowhere in industry. Are we missing something? 2009/07/02 at 7:54 am
anonymous Search for tool adoption. Process adoption has the same problems. Are we missing something? 2009/07/01 at 10:14 pm
Sam Here in Manchester (UK) there are rather a lot of schemes similar to the one you mention in Toronto, with cycle-strips down the side of a number of major roads in and out of the city. And it’s infuriating. There are the parked cars (which I was shocked to discover are actually legal according to local byelaws), the strips stop and start randomly at very short intervals, and worst of all, much of the time the cycle provision is to share a lane with buses and taxis. Being cut up by irate bus drivers is insanely dangerous, and I’ve had a number of hairy moments trying to get past on both the inside and the outside. By contrast, a recent trip to Paris revealed (for the most part) a much better engineered city: curb-separated lanes and a clear sense of priority for bikes. Your idea of dedicating a thoroughfare for cycle transit and Steve’s tales of Montreal’s bi-directional separated lanes sound like even better solutions yet. I always berate other cyclists who jump lights; they are such fools. Pointless: Bike lanes downtown 2009/06/25 at 10:03 am
Manuel The problem with the bike lanes in Montreal is that we pedestrians only see them when we are already on them, and then a cyclist will run over us. I think more enforcement is needed on cars invading bike paths, but I once read that Vancouver is getting rid of their bike lanes and actually making the bikes safer… Pointless: Bike lanes downtown 2009/06/10 at 10:42 am
Manuel I do not think a web designer is a software engineer, since he is using an application, the same way that a painter is not a chemical engineer since he is using a chemical product. I do agree with the term software engineer, because you are building up an application, no matter how complex (or simple) it may be. A civil engineer may be designing the next CN Tower or just paving a driveway; that does not mean that he can just become a “craftsman” for the latter task and not apply his knowledge and discipline in the required amounts. Not all developers are scientists, and change and inconsistency exist in many engineering fields, not only software! I study software, not software engineering 2009/06/10 at 9:59 am
Neil @Jorge: One of the things that I think about is relevance; it doesn’t seem to bother physicists, so why do we care? I wish I could close my eyes to the problem like some in the field do. I think the issue, like the CHASE workshop mentions, is that ultimately this is a human endeavour, and so research needs to reflect that. I study software, not software engineering 2009/06/06 at 2:04 pm
anonymous It is a shame that common sense like “certain things are useful in certain contexts” doesn’t get twittered, redditted, blogged, digged, slashdotted. Only this extremism of opinions gets any play online; see Joel Spolsky, Jeff Atwood, Uncle Bob, etc. Common sense isn’t interesting or bloggable. No one wants to hear that XP is a really bad idea if you have very strict requirements; no one wants to hear that, gee, if you have a requirements document already made, maybe SCRUM isn’t so useful. No one wants to hear “maybe SCRUM and XP are working for you because you didn’t do anything before”. No one wants to hear this; thus developers will continue to be assailed by consultants pushing the latest greatest things. BTW you suck at SCRUM and you should hire me to tell you how you do everything wrong. I study software, not software engineering 2009/06/06 at 10:49 am
Jordi Cabotmodeling-languages.comx The fact that “we don’t do engineering the way they had hoped” does not imply that we shouldn’t do it. What we should be able to do is find out the right amount of “engineering” for each project (depending on the size of the project, the criticality of the domain, etc.). I study software, not software engineering 2009/06/06 at 9:40 am
Jorge Good post; thanks Neil. “I would really like to move academic research up this list.” – many of us do, but not that many want to drop what they’re doing and study research questions that practitioners actually find relevant. I study software, not software engineering 2009/06/06 at 8:07 am
Neil In reply to Chris Siebenmann. Steve: totally agree. A German city I visited used metal bollards to create a path between the pedestrian zone and the parked cars. There are still problems at intersections, of course (not to mention the number of cyclists who ignore traffic lights). Chris: a valid argument, but all it takes is a few accidents in the bike lane for the timid to head back to their cars. Pointless: Bike lanes downtown 2009/05/22 at 10:05 am
Chris Siebenmann The story I’ve heard about Toronto-style bike lanes is that their real advantage (and possibly their real purpose) is that they encourage more people to go out and bike, because they make those people feel safer. In turn this may increase actual biking safety due to having more cyclists on the road. I find myself sympathetic to this story; if nothing else, having bike lanes (or even ‘share the lane’ markings and signs) sends a signal that biking is expected and being accommodated. Pointless: Bike lanes downtown 2009/05/22 at 9:57 am
Steve Easterbrook I completely agree. Toronto’s bike lanes are mostly useless. The only sensible way to do this is to build bike paths that motor vehicles cannot drive or park on, e.g. separated from the vehicular traffic by a raised curb. I noticed some of this in Montreal – they’ve taken a whole vehicle lane, built a curb to separate it, and put a bi-directional bike path in it. European cities get this right far more often. Pointless: Bike lanes downtown 2009/05/21 at 7:33 pm
Are we there yet? Always climbing, never arriving. Why a Ph.D. is like ice climbing 2009/05/08 at 2:05 am
Neil In reply to anonymouse. Yes, and I suppose the other would be a power-law distribution or something similar. Blue-collar compensation 2009/04/28 at 3:49 pm
anonymouse Like a normal distribution? Blue-collar compensation 2009/04/28 at 1:50 pm
Neil In reply to George. Cool! If/when I wrangle enough people together we’ll be in touch. Worldwide game day 2009/03/24 at 8:28 am
George I love roleplaying games and D&D (although I’m not as much of a 4th edition fan; I prefer 3rd. I started playing in 2nd many years ago.). Recently I’ve been on a Shadowrun (4th edition) kick, but I am looking for games to run or play in. I don’t know you, I don’t think, but I know Jorge. I wasn’t aware of that game store or, sadly, Worldwide D&D Game Day, otherwise I would have shown up! I live within a nice comfy 10-15 minute walk of there, I think. Worldwide game day 2009/03/24 at 1:10 am
Jorge Sure, I’m interested. It’s been over six years since I played role-playing games, even longer for D&D, and I’m not familiar with the new edition. But I’d still like to try it out. Worldwide game day 2009/03/23 at 10:47 am
Steve Easterbrook …and of course most users just want to make sure that the system won’t be non-functional very often. There is no such thing as a non-functional requirement 2009/03/22 at 2:04 pm
Carlos Castro Good geeky post! I think the distinction exists at a higher level – when you start eliciting requirements. At this point it is easy to see that some requirements specify functionality and others specify ‘qualities’. However, as you move along in the requirements engineering process, you ultimately have to drill down and decompose the NFRs to the point where you have operationalizations for those ‘qualities’. At this lower level they are equally as functional as the Functional requirements. There is no such thing as a non-functional requirement 2009/03/20 at 4:35 pm
no name I suspect a lot of AGILE successes come from the fact that they didn’t bother to DO ANYTHING before. I suspect if you add process to something that process can help with (like some aspects of software development) you might actually see gains. A lot of the pro-agile stuff you see often comes from people who didn’t actually do anything before. Really the only clear agile success stories come from the authors of the XP series, and even their pet project C2 was scrapped. I think anything is probably better than nothing, because anything implies that you’re self-reflecting and thinking that you need to improve, whereas nothing implies an ad hoc process that you hope just works. Organizational maturity and software development 2009/03/17 at 11:44 am
Neil In reply to Jakub Narębski. Well, I think this sort of makes Abram’s point: if you have to read the manual each time, maybe the tool isn’t as intuitive as it should be. Two minor thoughts 2009/02/22 at 12:02 pm
Jakub Narębski @Abram: “I checked out an older version of a repo”… and didn’t pay attention to the message from git, hmm…? “Eventually I found git-lost-found”… no need for such a low-level tool. An ordinary “git checkout -b <new branch name>” should be enough, and if you lost a commit, there is always the reflog: “git reflog HEAD”. In short: read the manual first, please… Two minor thoughts 2009/02/21 at 9:52 pm
Neil In reply to Abram. All true. I have a feeling that if your workflow is similar to Linux, it will work for you. If not, or you haven’t got a good sense for the workflow, you’ll be in trouble. Really, for working on my small projects, SVN will do as well. Two minor thoughts 2009/02/16 at 3:58 pm
Abram My problem with GIT is that the model is dirty and unclear. GIT is real-deal software and it relies on an underlying model, but when I think I’ve learned the model I find I haven’t. Or the maintainers have not given me an interface to do so. Here’s one example. I checked out an older version of a repo and I committed. Where does that commit go? Turns out it was on a non-existent branch, so I couldn’t check it out without the exact commit ID; I couldn’t do anything with it. It was effectively lost. So then I tried to apply my knowledge of GIT: first I searched for commands which would let me query the children of a commit. Nope. Then I searched for commands which let me search for nodes with a certain parent. Nope. Eventually I found git-lost-found and recovered it. I didn’t have this problem in DARCS (http://darcs.net/manual/node9.html), which was built from the ground up on a formal model of patching. Surprisingly, the creator is not some formal models/methods buff, but a physicist. I don’t use darcs much anymore because it is rather slow. Two minor thoughts 2009/02/16 at 3:37 pm
August I believe the subjunctive mood in English is still alive and kicking. It just seems dead because our verbs don’t have a lot of different endings like other languages. Except for the verb “to be,” the subjunctive is mostly undetectable in English. Only in the 3rd person singular (he/she/it) can you see it at work. Indicative: He GOES to a meeting. Subjunctive: I insisted he GO to a meeting. If I turned it around and said, He insisted I GO to a meeting, that would still be subjunctive, but it would be undetectable, no different from the indicative, I GO to a meeting. The verb “to be” tells the real tale, because the subjunctive verb form is “BE” for all persons and that doesn’t coincide with any of the indicative verb forms. Indicative: I AM here. You ARE here. She IS here. We (or they) ARE here. Subjunctive: He insisted I BE here. Or: They insisted we BE here, etc. Then there is always, “If I WERE a rich man.” “If she WERE a mermaid.” But again, you can’t hear a difference with “you, we, or they,” because they use WERE in either case. So, English speakers DO STILL use the subjunctive. It’s just hard to tell when we’re doing it most of the time. The subjunctive case and intentionality 2009/02/12 at 10:52 pm
Christian Muise “The wordle diagram is as close as most will get to actually reading it :)” A sentiment shared by most people regarding their master’s thesis — mine included :p. M.Sc. thesis wordle 2009/02/10 at 10:18 pm
Neil In reply to Christian Muise. It’s been a while … “Towards Cognitive Support in Knowledge Engineering: An Adoption-Centred Customization Framework for Visual Interfaces”. The wordle diagram is as close as most will get to actually reading it M.Sc. thesis wordle 2009/02/10 at 9:36 pm
Christian Muise Heh. I’m impressed. What was the title of your thesis? M.Sc. thesis wordle 2009/02/10 at 8:08 pm
Jorge Great post. From a grad school perspective though, I think we are slow in catching up with developments in the real world. I remember the panel of a Computer Supported Cooperative Work conference where the panelists were beating themselves (and the community) down for not predicting nor reacting quickly enough to the greatest development of CSCW in history: the Internet. Computer science is doomed! 2009/02/04 at 8:41 am
Jorge Very interesting points. I’m not sure the thalidomide analogy is appropriate. If a software company uses a disastrous ‘solution’, it goes out of business. Successful practices then replicate in a process similar to evolution. So if something has been widely used for a while, is it not reasonable to assume it works sufficiently well? Empiricists vs constructionists? 2008/10/25 at 10:37 am
Sherdim I add my soul’s cry to this post! I have searched for a couple of years for a personal organizer/note-clipper instrument. Although I had tried TW a year ago, I hoped to find something more semantics-aware, more tunable, and maybe more intelligent (auto-synchronizing, compatible with everything, etc.). Every step you describe, I went through too! Instead of Tomboy I tried WikidPad. In the end I found MGTD, based on TW, and though its organizing conception is not very suitable for me, the tagging scheme and compatibility of TW led to my decision. I am a researcher too, so I hope to handle my specific needs in organizing everyday operations step by step with JavaScript plugins, which are easy for TW. About its non-standard wiki format: there is a standard plugin for RSS export. It can be tuned so that every edit generates an RSS feed ready for publishing or importing into anything. Good luck! Research note-keeping 2008/10/14 at 10:21 am
PM Hut The 68% is highly subjective, because the definition of failure is. What is failure: is it a dead project? Or a project behind schedule and/or over budget and/or with lower quality/fewer features than scoped? Or an internal assessment from the stakeholders? I’ve seen 30%-40%-50%…90% failure rates; almost all those stats target the IT sector. Other sectors have much lower failure rates, but usually, in non-IT cases, the failure is of catastrophic proportions… Requirements and business project management 2008/10/08 at 2:43 am
Vic Gee (mind-mapping.org) I have a site giving a database of information management tools with thumbnail samples. It includes mind-mappers, concept mapping software, outliners and a number of other graphical tools. You can select to see just those for a specific OS, so you could choose just Mac to see those. If you use one of the browser-based ones, you’ll be pretty well cross platform whatever machine you want to use. Bubbl.us sounds as if it might be for you – it allows disconnected sections, a web, or hierarchical structure – depends how you feel. Not sure if it allows long enough notes for you though. An academic-slanted one is Sematik. This has a mind-mapping base but is aimed at producing finished documents, which may solve your ‘never find myself returning to those notes’ problem. Vic http://www.mind-mapping.org The master list of mind mapping & information management software Research note-keeping 2008/09/19 at 10:52 am
Sandy Nice post! Tomboy does support tagging at the API level, and in fact Notebooks are just a special kind of tag. We found that for our users, notebooks were a more useful concept, especially considering how fast and easy note search is. That being said, plenty of people want a regular tagging UI, so don’t be surprised if an add-in shows up one of these days. We experimented a lot with it before deciding on Notebooks, so there’s even old code floating around in SVN for interested parties. You may also be interested in using Conduit to sync your Tomboy notes. Though I haven’t used it myself it seems to be a popular approach if you can’t set up your own ssh or webdav server. All that being said, TiddlyWiki is a great tool and I’m glad you’ve found something that works for you! Research note-keeping 2008/09/19 at 9:10 am
B. Shaw Great post! Another web app you didn’t mention is Springnote (www.springnote.com). It has a ton of features great for note taking and collaborative tasks. You can quickly get on Springnote, edit notes, and then log off. It’s that easy. Research note-keeping 2008/09/19 at 3:58 am
Jorge I think your reading of this is correct, both in that Microsoft would like to provide better abstraction mechanisms for developers, and in that UML is pretty much off the radar here. I’ve seen lots of diagrams here, actually, in the few weeks that I’ve been around. But they’re mostly used for informal communication and as a flexible abstraction tool. In comparison, UML 2.0 is stodgy and cumbersome, and MDD is as far from being an abstraction as code itself is. I hadn’t heard of the Oslo project before. I know it wasn’t even mentioned in Bill Gates’ farewell ceremony when he and Ballmer discussed the future of Microsoft. UML – Poised for takeoff? 2008/07/11 at 4:14 pm
roy Journals, in my opinion, have much more impact and relevance, firstly because many of the journal papers are selected from best papers presented at conferences, and secondly, because the review process is more thorough and constructive. Ranking software engineers 2008/05/12 at 5:41 pm
Jerash From my limited (practical only) experience it was always about getting something done to give the impression of progress to clients. Seems cynical, but the phrase “we are 95% complete” is very misleading on its own. What are the metrics? And what are the criteria for success? Those should be every client’s questions. How you teach that is a tough problem. On Software Schools 2008/04/27 at 11:39 pm
shailly Interesting….keep it up. On Software Schools 2008/04/21 at 6:22 am
Jorge Good post. I agree with you, although I see where Fowler is coming from: the idea that there are best practices leads to certification, which leads to bureaucracy and inefficiency. I think identifying the context is key here, and you mention this in the end. For each team, customer, and software project, there is probably one best approach, or school of software, as Fowler calls them. We don’t know them yet; as scientists we hope to discover them. For now, stating that all schools might be valid for some context is the best we can do. On Software Schools 2008/04/13 at 9:58 am
Tarah Wheeler Thank you very much! I’m a big audiobook fan, and I had no idea how to do this. If only iTunes would get a clue and add a help feature for “bookmark”! Doesn’t that sound reasonably intuitive to you?? ITunes bookmarks 2008/04/07 at 2:47 am
Anthony Brown Cheers! ITunes bookmarks 2007/11/22 at 7:27 pm
Jerash Sounds like a wonderful trip and you are encouraging to others who might be considering a similar endeavor. Camping in France: some tips 2007/11/15 at 2:25 pm
Jorge Aranda Complexity is surely a factor, but I think it is only one of several. There are companies that deal with very complex projects and yet shun the systematic use of UML. Microsoft is one of them. In a paper I discussed recently, Cherubini said that in his discussions with Microsoft engineers, he found that “most of the diagrams had a transient nature because of the high cost of changing whiteboard sketches to electronic renderings. Diagrams that documented design decisions were often externalized in these temporary drawings and then subsequently lost”. (Oh and Owen produces some excellent hammers! Mostly everyone in Toronto seems to be using them.) An explanation for UML usage statistics? 2007/08/25 at 12:34 pm
Mama Ernst Better late than never. Nice way to show some photos with captions. I assume it was faster than using Blogspot; each pic took about 90 seconds to upload, and when you’re in a net cafe, that can be costly! Experimenting with a Flickr photo browser 2007/08/15 at 11:52 am
Tomas (unitedstates4africa.web.net) Well done Neil – give me a break with the lack of transparency. I would really like to check my diplomacy at the door, but will resist temptation. It would seem our man Jorge has revealed the answers to your questions and despite Harris’ protestations to the contrary, re protecting donor confidentiality (when are donors ever NOT interested in publicity, except when there is something to be hidden!), the NRSP is a sham. No surprise the NP publishes their propaganda. What are they hiding? 2007/08/12 at 12:56 am
Jorge Aranda Apparently you’re not the only one that can’t get this information, and it seems you never will. From Wikipedia: “The NRSP has been criticised on the basis that it is an industry-funded body which presents itself as a grassroots organization, an activity referred as Astroturfing. Harris rejects this criticism but refuses to reveal the sources of NRSP funding.” From SourceWatch: “According to an October 16, 2006, CanWest News article, journalist Peter O’Neill asked Harris about who financially backs the NRSP. O’Neill reported that, according to Harris, “a confidentiality agreement doesn’t allow him to say whether energy companies are funding his [the NRSP] group.” [25] Subsequently, Harris stated that there was no “confidentiality agreement”. He also insisted that “it is normal for non-profit entities like NRSP to protect the privacy of supporters by not publicizing contributions.” And from desmogblog.com: “Two of the three Directors on the board of the Natural Resources Stewardship Project are senior executives of the High Park Advocacy Group, a Toronto-based lobby firm that specializes in “energy, environment and ethics.” (…) Timothy Egan, is the president of the High Park Advocacy Group, and a registered lobbyist for the Canadian Gas Association and the Canadian Electricity Association.” What are they hiding? 2007/07/09 at 11:45 pm
Anon Thanks, I was going nuts trying to figure this out before I found your post. Thanks a lot!!!!!! ITunes bookmarks 2007/06/13 at 12:14 pm
David Locke Why are requirements non-deterministic? The non-determinism originates in the fact that the inputs change and that there is rarely a single source for those inputs. Given the elicitor’s role in asserting the elicitor’s own needs over those of the requirements source, the problem starts there. Change the elicitor, change the requirements. Then, you have the efficiency focus of the requirements elicitation process, which says get all the requirements sources into a room and have them decide what the requirements will be, which in turn means sacrificing requirements to utility functions, and washing away the cultural differences of the users, so the requirements can be efficiently developed. This latter issue injects politics and sociology into the elicitation process, thus so much for science, and hurrah for requirements volatility. The software world has ignored and continues to ignore culture, aka meaning, in the systems it develops. Silo busting, integration, and real-time data warehouses are just making it worse. Developers are much less expensive now, so let’s put an end to efficient development as a goal, and let’s put a stop to generic software that fits no particular user and drives up those negative-use costs that accountants don’t account for, but every CFO feels. If you elicit requirements from one person, they stop being non-deterministic, they make that person efficient, and they are much less volatile. The elicitation process is not to blame for the non-determinism. By way of a practical application, say we are developing a cost accounting system. Which will it ultimately be: traditional, activity-based, or throughput? Throughput being the most recent, it has the least adoption by older accountants, more by the fresh-out-of-school accountants, and the least adoption by management, aka the utility function. So it won’t be throughput, unless management has a vocal early adopter among it. 
Activity-based will probably win, because it is the majority paradigm today. Traditional will probably lose, because it is basically fostered by the older accountants waiting for the age-based layoff, or retirement. Still, all three will be elicited. Then, they will be fought over, with the utility function winning the day, expertise being ignored, and knowledge being destroyed. Once the development of those requirements is commissioned, the requirements politics will continue to churn the requirements. The end result will be a mess, which will require participants from all three paradigms to build Excel spreadsheets to compensate for what the system won’t do: time wasted invisibly, except on the actual, non-accounting bottom line, the negative-use costs. This, more than non-determinism, is the real problem with the requirements elicitation process. Repeatability in requirements elicitation 2007/03/13 at 9:25 am
Jorge It is definitely a great movie; I’m glad you liked it too! The translation indeed would be “The Labyrinth of the Faun”. The faun, which I believe in this case stands for “male fairy”, would be the character Pan, not the deity. Also, in Spanish “pan” means bread, so “El Laberinto de Pan” would have a very bizarre meaning for us: The Labyrinth of Bread. As for whether the Spanish Civil War resonates with Mexicans: during the war, many Spaniards escaped to Mexico, and were generally prosperous there. So among many middle- and high-class circles in Mexico, the Spanish Civil War felt close and is still relevant. I assume Guillermo del Toro, the director, was raised in this environment: he has one other movie set in the same period, The Devil’s Backbone. In general, Mexico has a complex relationship with Spain: to put it simply, Mexicans see Spain both as our “cultural mother” and “the thief who took away all our gold and destroyed our native civilizations” (everyone’s views are more elaborate than that, of course, but that gives you an idea). I’d also recommend another recent movie by a Mexican director: Children of Men. The best movie I’ve seen in a long time. Movie recommendation 2007/03/07 at 9:19 pm
Jorge I agree as well. Remember that we’re stuck with the ‘software engineering’ label almost by accident, because the organizers of a crucial conference decades ago felt we should be striving towards the engineering ideal. Since then people have tried to match software development to engineering processes, never satisfactorily. This is the second recommendation I get about Cockburn’s book – I should check it out. Cockburn on the 3 pillars of software engineering 2006/12/13 at 5:07 pm
Markus Neil – I liked your post. Especially the aspect that agile approaches put the responsibility back on people (as opposed to processes). That might explain the advantage that agile approaches have over process-heavy ones. I think, though, that especially large, complex and/or safety-critical software systems will continue to rely on process-heavy approaches. Beyond a certain point, I would assume that agile approaches are too unstructured, too unpredictable and too unorganized. Regarding your pole analogy (which I liked very much): you might want to use the north pole in your story, because it (exclusively) consists of ice (as opposed to the south pole), and it is there that you are actually confronted with a moving target: the ice masses floating around the (north!) pole. Cockburn on the 3 pillars of software engineering 2006/12/08 at 10:41 pm
Neil Good points about information overload. I think tagging might be useful here, esp as the arXiv categories are VERY coarse-grained. This is especially true in the multi-disciplinary era we seem to be in. My other, more cynical observation would be that your second reason is equally applicable to conference proceedings! Perhaps to a lesser degree, though. The role of arXiv in information science research 2006/11/16 at 3:26 pm
Jorge Interesting post, Neil. I think the first problem you talk about (just how useful is this paper?) is more important than it seems, for two reasons. First, even considering only peer-reviewed work, there are hundreds, perhaps thousands, of relevant papers produced every year, in any field. Just keeping on top of these involves a great deal of time. Without that initial filter that is peer review, we’d have four or five times as many papers to consider. The second reason is that we often can’t determine the usefulness of a paper until after we’ve read a considerable fragment of it. For example, a paper might seem helpful until, halfway through, you spot a glaring methodological mistake that makes the paper completely unreliable. It’s unrealistic to perform this careful, critical thought on every technical report or paper that hasn’t passed the peer review test. On the other hand, it’s still a worthy initiative, especially if you’re only browsing for very specific topics; peer-reviewed publications are terribly slow, and something like arXiv could speed things up quite a bit. The role of arXiv in information science research 2006/11/16 at 3:10 pm
Sotirios Actually, the “Liaskos corollary” was a comment posted to the AI mailing list by a prof! The funniest part is that, I think, it has been experimentally confirmed by some MIT folks. The Three Laws of Academic Publishing 2006/11/09 at 12:45 am
Jorge These are great – and “Salay’s query” is devastating. You guys must have had a lot of fun when these came up. The Three Laws of Academic Publishing 2006/10/16 at 5:25 pm
Jorge Agreed, Neil. I’m pretty convinced that software cannot (perhaps should not) be engineered, at least not in the way we understand engineering to be. It’s a controversial idea, and considering the young age of the field, perhaps premature. The gut instinct, however, suggests it’s right. My dangerous idea 2006/01/09 at 12:31 pm
Chris Fogelklou Completely, totally off topic, Neil, but I am just lettin’ you know (in case you didn’t already know) that our 10-year high-school reunion is this Saturday, Jan 14, 2006! I decided to google you because I didn’t know if you were still at UVic… Apparently not! Judging from the look of this blog, you have done rather well for yourself, at least academically (the financial part usually comes later in that case). You wouldn’t be able to brag about that, unfortunately (it’s a no-boasting party, teehee). Congrats on the PhD work! Hope you can make it, but I won’t get my hopes up since you’re out in T dot. Cheers, Chris members.shaw.ca/spectrum1995reunion My dangerous idea 2006/01/08 at 10:59 pm
Anonymous Thanks for the info… Seems to work OK ITunes bookmarks 2005/12/08 at 9:19 pm
Yaroslav Bulatov People haven’t proved that there isn’t a killer algorithm for learning to predict relevant websites, but there’s something related, the No Free Lunch theorems: http://www.no-free-lunch.org/ Basically, an algorithm that works exceptionally well in one area is bound to be exceptionally bad in another area. This implies that a good algorithm for Google would have to be hand-tailored to its prediction task, and that we would have to hand-code a lot of the knowledge it’s trying to extract from the data into the algorithm itself. An interesting application of the “no free lunch” philosophy is the idea of anti-learning. Instead of designing an algorithm to perform well on realistic data, we can design an algorithm to perform badly on random data, to the same effect. Peter Norvig talks at UofT 2005/11/23 at 2:53 pm
Stephen Fickas You note the following question: “Big question: is the RE goal achievable? I.e. can a sufficiently detailed analysis actually produce something perfectly in line with user expectations?” Almost certainly not. This is why, I conjecture, CWA-type analysis is needed, since it provides a domain model (as Rick is suggesting) that the user can use at the KBB level, to deal with unanticipated/unexpected events. If not, perhaps just use a small goal model and define a tight system boundary. We’ve started to run up against this question. Might want to look at my RE05 paper off http://www.cs.uoregon.edu/~fickas. I think the notion of a “goal attainment scale” is interesting. Steve GADG: Requirements monitoring 2005/11/21 at 12:35 pm
Piotr Kaminski First, a quibble: Reef is all about integrating code and UML with programmer interference. Automated RE systems are a failure except for very specific scenarios. However, Reef also treats developer attention as a precious resource, and tries to leverage their effort far more than other tools I’ve seen. The very simple difference between Reef and MDA is that in MDA, the model must be a complete representation of the system and therefore bears the full complexity of the implementation. In Reef, the model is an abstracted representation of the system, with designer-selected details elided to enhance high-level understandability without affecting the implementation. MDA seems to be based on the belief that, with sufficiently powerful transformation facilities, a model can be both abstracted and complete; I think this is not achievable in the short term, and in the long term is essentially equivalent to a new higher-level programming language. Which would certainly be a good thing, but history tells us that 1) it’s not likely to be graphical and 2) it is not wise to let every fool invent his own language (as MDA seems to encourage). Trac and me 2005/09/11 at 1:32 am
Neil Interesting work, however, I think it’s getting away from the ‘Simple’ part of RSR to suggest developers use calculus of any kind. I’m skeptical any system designer will be able to grasp that. RSR: implementing Really Simple Requirements 2005/06/14 at 4:23 pm
Jon Hall Dear Neil (?), I, too, begin with Jackson’s framework. Here at the Computing Research Centre of the Open University we have a very active Problem Frames group, and that includes foundational research on ESR-tuples (we use WSR (World) or KSR (Knowledge) :-). I’d like to bring your attention to some work we have done; the web page is http://computing-reports.open.ac.uk (look for Jon G. Hall and Lucia Rapanotti), some of which addresses the issues you raise in the first paragraph. I would also be interested to see if our framework (see http://computing-reports.open.ac.uk/index.php/2005/200505) could extend to cover problem solving in Agile methodologies. If you would be interested, please email me. Very best wishes, Jon RSR: implementing Really Simple Requirements 2005/06/14 at 5:07 am
Neil Sure, certainly. I should really post the various hacks I’ve collected over the years somewhere. I may have to do some tracking on that one as I’ve moved computers since then. ACSE and Portland 2005/02/24 at 1:17 am
Tom Heath Hi Neil, Your RDF output styles for EndNote sound really interesting and may be useful in a little project I’m working on. Would you be prepared to share the EndNote style file, or walk us through how you did it? Tom. ACSE and Portland 2005/02/23 at 12:07 pm
Anonymous Neil, my legs are twitching just reading this. Correct your time in the opening paragraph; you added 10 minutes! Dad and I were thinking of you as we did our runs on Sunday from 9:15 to 9:45, just about your ‘wall’ time. Too bad the telepathy didn’t work. Do you recall swimming races you did in school? I think it was with St. Michael’s. Boring to watch, but at least the spectators were warm and there was always a snack bar! Love from Mum The Niagara Marathon 2004/10/26 at 5:41 pm
Neil In reply to Alexy Khrabrov. If you use Scrivener and MMD export, anything between <!-- and --> is passed directly through as LaTeX. So for complex tables or math I prefer this approach to the MMD footnotes and Unicode math symbols. e.g. <!-- \begin{table}[h] \caption{Harker’s types of requirements change (after \cite{harker93})} \centering \label{tbl:harker} \begin{tabular}{ccc} --> will be skipped in the output conversion (since the document is converted to XHTML), the comment markers removed, then parsed by LaTeX as a table. Very handy. Some notes on integrating Mendeley, Scrivener, MultiMarkdown and (Xe)Latex 2010/09/23 at 9:16 pm
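Based on Neil’s description, a complete table wrapped this way in a Scrivener/MultiMarkdown document might look like the sketch below; the caption, label, and cell contents here are invented for illustration, and this assumes the toolchain strips the comment markers for the LaTeX target as he describes:

```latex
<!--
\begin{table}[h]
\caption{Hypothetical pass-through table}
\centering
\label{tbl:example}
\begin{tabular}{cc}
Requirement type & Example \\
Mutable & new regulation \\
Emergent & user feedback \\
\end{tabular}
\end{table}
-->
```

Because the whole block is an HTML comment, the XHTML output ignores it; once the `<!--`/`-->` markers are removed, LaTeX sees only the bare table environment and typesets it normally.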
Alexy Khrabrov How exactly do you surround LaTeX with HTML comments? An example would be great! Some notes on integrating Mendeley, Scrivener, MultiMarkdown and (Xe)Latex 2010/09/23 at 2:29 pm