Why macros are evil

Great article! It turns out the limitation is with the authors of the code using ancient techniques. Thanks again, I now know I am not the insane one! This article is really great, the best I have ever read on writing a good Arduino library. I have a question regarding macros.

Is there a way to keep the preprocessor from replacing this macro, other than doing #undef isfinite, which appears dangerous? If you wrap the include of the library and place the #undef there, you most likely avoid problems.

Wrapping the library would be the best way to deal with this situation.
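A minimal sketch of that wrapping idea, assuming the clash is with the isfinite macro from <math.h>; the wrapper and library header names are placeholders, and the point is that the #undef lives in exactly one file:

    /* library_wrapper.h -- include this header instead of the library header. */
    #ifndef LIBRARY_WRAPPER_H
    #define LIBRARY_WRAPPER_H

    #include <math.h>          /* this is where the isfinite macro comes from */

    #ifdef isfinite
    #undef isfinite            /* remove the macro before the clashing header is read */
    #endif

    #include "some_library.h"  /* placeholder for the header that clashes */

    #endif /* LIBRARY_WRAPPER_H */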

Avoid Macros for Constants

Many people use macros for constants. First, an example of bad code that uses a macro definition as a constant; the same value can be written as a proper typed constant instead. But let us have a look at the produced machine code:

    ldi r24,lo8(10)
    ret

There is absolutely no difference in the produced machine code.
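A minimal sketch of the kind of comparison being made here, assuming an AVR/Arduino C++ target; the names ANSWER_MACRO and kAnswer are invented for illustration, and the commented assembly is what avr-g++ typically emits for an 8-bit constant return:

    #include <stdint.h>

    // Macro constant: pure text replacement, no type, no scope.
    #define ANSWER_MACRO 10

    // Typed constant: scoped and type-checked, yet it compiles to the
    // same machine code as the macro version.
    constexpr uint8_t kAnswer = 10;

    uint8_t getAnswerMacro() { return ANSWER_MACRO; }  // ldi r24,lo8(10) / ret
    uint8_t getAnswer()      { return kAnswer; }       // ldi r24,lo8(10) / ret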

Avoid Macros for Header Guards

A long time ago, it was necessary to use macros to protect headers from being included twice by the preprocessor and compiler.

Make Register Usage Configurable

Sometimes you want to access a hardware register directly.

When to Use Macros

There are a few situations where you cannot avoid using macros.
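One classic example of such an unavoidable macro, sketched here with an invented name: an assertion-style check that reports the file and line of the call site, which a plain function cannot capture (at least not before C++20's std::source_location). Note the unique MYLIB_ prefix, which ties into the naming advice below.

    #include <cstdio>
    #include <cstdlib>

    // MYLIB_REQUIRE must be a macro: __FILE__ and __LINE__ expand at the
    // call site, and the # operator turns the condition into text.
    #define MYLIB_REQUIRE(cond)                                          \
        do {                                                             \
            if (!(cond)) {                                               \
                std::fprintf(stderr, "%s:%d: requirement failed: %s\n",  \
                             __FILE__, __LINE__, #cond);                 \
                std::abort();                                            \
            }                                                            \
        } while (0)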

This will limit the scope of the macros, which will help you find any conflicts; you actually just have to search a single file for the problem. Add a unique prefix to the name and use only uppercase letters. A unique prefix in front of each macro reduces the risk of naming conflicts; at the very least, prefix them with the name of your project, library or class.

For example, here's a real Scheme macro for a functional "replace" loop, taken from here. The key to reading this is to understand that the names immediately after "syntax-rules" are going to be keywords in the macro.

After that, you have patterns built with the keywords and variables, paired with replacements. Expressions are matched against the patterns in sequence, with the keywords lined up, and the first pattern that matches determines the expansion. But there's a problem even with hygienic macros, which is the fact that the code you run isn't quite the code you wrote.

That matters when it comes to debugging, profiling, and so on: you can't single-step a debugger through a macro, or profile a macro. And it can be a huge surprise sometimes. Looking at the expansion, you could quite justifiably ask: "What the hell is that?"

The answer is that the "primitive" do-loop is actually also a macro. So macro expansion expanded them both out. And the end result is quite thoroughly incomprehensible.

I'd hate to be confronted with that in a debugger! Particularly if the body of my loop was more complicated than a single assignment statement! The Scheme system comes close to addressing that. Because the macros are so strictly structured, you can get at least some correlation between elements of the macro and elements of the executable code. But the fact remains that the transformations can totally break that.

Scheme macros are fully Turing-equivalent: you can do anything in a macro, and most things will result in a mess. Writing non-capturing macros with gensym is hard in the same way that getting scoping right is hard: it's not obvious at the beginning, but it becomes natural with experience.

There is no need for any global analysis. In CL, the package system, plus the fact that one can't portably redefine standard functions (even locally), helps, but it's a sociological solution, not a technological one. Still, in practice, it's Not An Issue. Your CL macro is also gratuitously obfuscated.

Quasiquotes are much easier to read; again, the rules are simple and local, you just have to get used to them. Finally, you don't debug macroexpanded code unless you're debugging the macro itself.

Do you complain that high-level code compiles to barely readable asm? You have access to other ways to debug than stepping; stepping is a low-level way to debug that, in my experience, rarely does any good. PS: your Lisp code setqs a binding you didn't create. Is it global? Will it clobber something the user uses, beyond just capturing it? Will it die on threaded code? Will demons shoot out of your nostrils?

Nobody knows. The issue of debugging macros is addressed to some degree in PLT Scheme's DrScheme environment, in their macro stepper tool. It might be fair to note that syntax-rules macros are quite different from, and much more dangerous than, syntax-case macros. The stepper doesn't help you debug the expansion while it's running, though.

I've always just used them for constants. If we are going to be very picky (and programmers like to be picky, right?): usually, the C compiler doesn't ever get to see the macros or any statement starting with a #. The preprocessor is the one that blindly does the search and replace. The preprocessor can be the same executable as the compiler, but not always.

The C preprocessor makes a pass through the code to deal with all the ifdefs, defines, and the like, and then hands the result off to the compiler. None of that changes your complaint about the macro system, though. I cannot even begin to count how many times I've had the preprocessor emit bad Fortran thanks to a careless macro. The only thing that is truly missing is the ability to pass arbitrary code blocks as parameters.
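A minimal C-flavoured sketch of that kind of blind, careless replacement (DOUBLE is an invented name; the same failure mode applies when the preprocessor is run over Fortran source):

    // Careless macro: no parentheses around the expansion.
    #define DOUBLE(x) x + x

    int r = DOUBLE(3) * 2;   // expands to 3 + 3 * 2, which is 9, not 12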

I think there is no way around these sorts of problems (well, maybe some ways to deal with examining macros in a debugger), because what macros are trying to do is inherently dangerous.

Nearly all interesting macros are trying to bend their way around the base syntax of the language. Unfortunately, writing and understanding new syntax is just plain harder than writing ordinary code in a language. You can't prevent people from writing bad code, but you can make it easier not to. Unfortunately, you can't do some things in Lisp without macros; for example, you cannot change the order of evaluation without them.

So you can't write 'if as an ordinary function, because applicative order would evaluate both branches before the choice is made. I don't know a whole lot about programming yet (I'm going to learn a lot more over the next couple of years), but I think I follow your argument, for the most part. So, if macros have all these problems, why use them at all? If you need a function that returns twice its input, why not just write a subroutine?

Are macros ever the better choice? If not, why do languages have them at all? The reason for using macros is that sometimes, rarely, they just can't be avoided, or avoiding them results in even worse code in terms of maintenance. Another reason is that macros are evaluated at compile time, so the cost of evaluating them is paid only once. And sometimes using macros produces cleaner code, simply because what you're macro-expanding CAN'T be written as a function.

Even a non-function macro can cause a lot of problems. The same happens with namespaces. The first solution is to avoid macros as much as possible; a true function has several advantages over a macro. First, the arguments are evaluated only once, something a macro cannot guarantee. Second, the time for a function call is dominated by the time it takes to evaluate the arguments, so the call overhead is ultimately negligible. Third, a function is always syntactically safe, yet another thing that is not ensured by macros, especially when they are used inside compound statements.

Since not all code bases seem to be aware of the problems inherent to the CPP, you may have to deal with stupid macro names, even in include guards.
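A minimal sketch of the single-evaluation point, using the classic max example; the names MAX_MACRO and max_fn are invented for illustration:

    #include <cstdio>

    // Macro version: each argument appears twice in the expansion,
    // so side effects in the arguments can run twice.
    #define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

    // Function version: each argument is evaluated exactly once.
    inline int max_fn(int a, int b) { return a > b ? a : b; }

    int main() {
        int i = 0, j = 0;
        int m1 = MAX_MACRO(i++, -1);  // i++ wins the comparison and is evaluated again: i ends at 2
        int m2 = max_fn(j++, -1);     // j is incremented exactly once: j ends at 1
        std::printf("macro: i=%d m1=%d, function: j=%d m2=%d\n", i, m1, j, m2);
        return 0;
    }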

The defensive solution to this problem is to #undef macros known to cause trouble, with a warning so the removal is visible. Or you can simply #undef them quietly. The proactive solution is to use smarter names for your own macros. So, basically, the CPP is a good tool for testing the environment, checking for defined macros, and for conditional compilation, but a very, very, very bad tool for code generation.
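A minimal sketch of that defensive #undef, using min as an example of a macro name that commonly pollutes the environment; note that #warning is a long-standing compiler extension (standardized only in C23/C++23):

    // Some platform headers define `min` and `max` as macros, which breaks
    // std::min/std::max and any method that happens to be called min().
    #ifdef min
    #  warning "a 'min' macro is defined; removing it so std::min keeps working"
    #  undef min
    #endif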

I can understand that it is tempting to use macros in C and C99 as a weak substitute for metaprogramming, as the language really provides no such facilities.

The standard says that any other predefined macro names shall begin with a leading underscore followed by an uppercase letter or a second underscore.

However, I think you will agree that this is not always the case in practice. Here a developer could have written a real function. If you have ever had to face a project peppered with macros that are made of other macros, then you know how infernal it is to deal with such a project. An example of barely readable code is the GCC compiler, already mentioned above.
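A minimal sketch of the kind of macro chain where a real function would have done the job; all names here are invented for illustration:

    // A macro built out of other macros: control flow is hidden and the
    // arguments may be evaluated an unpredictable number of times.
    #define CHECK_PTR(p)       ((p) != nullptr)
    #define CHECK_RANGE(i, n)  ((i) >= 0 && (i) < (n))
    #define SAFE_AT(buf, i, n) (CHECK_PTR(buf) && CHECK_RANGE(i, n) ? (buf)[(i)] : 0)

    // The same logic as a real function: typed, steppable in a debugger,
    // and each argument is evaluated exactly once.
    inline int safe_at(const int* buf, int i, int n) {
        if (buf == nullptr || i < 0 || i >= n)
            return 0;
        return buf[i];
    }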

I run into them everywhere, along with their consequences. A macro has to be considered in the context of every possible way it might be used; otherwise you are likely to get an extra headache. Some people think that debugging is for wimps. That is certainly an interesting question for discussion, but from a practical point of view, debugging is useful and helps to find bugs.

Macros complicate this process and definitely slow down the search for errors. Many macros cause multiple false positives from static code analyzers because of the way they are written. The hitch with macros is that analyzers just cannot differentiate correct but tricky code from genuinely erroneous code. In the article on the Chromium check, there is a description of one such macro.

Almost always you can write an ordinary function instead of a macro; reaching for a macro anyway is usually just laziness. This laziness is harmful, and we have to fight against it. A little extra time spent on writing a proper function will be repaid with interest. The code will be easier to read and maintain, the likelihood of shooting yourself in the foot will be lower, and compilers and static analyzers will issue fewer false positives.

Someone might argue that the code with a function is less efficient, but modern compilers inline such small functions without trouble. And if we are talking about evaluating expressions at compile time, macros are not needed either and are even harmful; constexpr covers that case.
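A minimal sketch of that compile-time point, assuming C++11 or later; SQUARE_MACRO and square are invented names:

    // Macro: pure text substitution, no type checking, and the argument
    // appears twice in the expansion.
    #define SQUARE_MACRO(x) ((x) * (x))

    // constexpr function: type-checked, evaluated at compile time when the
    // argument is a constant expression, and still usable at run time.
    constexpr int square(int x) { return x * x; }

    static_assert(square(4) == 16, "evaluated entirely at compile time");

    int buffer[square(8)];  // the array size is computed by the compiler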
