Les Cargill <lcargill99@comcast.com> writes:

> So your top example does not manage the "can't open the file case" at
> all.

True, I was aiming to show how the two versions differ in how they close
the file, rather than in how they open it. The C++ version throws an
exception if it can't open the file.

> I think we have to separate this into two issues:
> - Some language systems ( not 'C' ) provide for automatic calling of
>   destructors.

Right, so I think this is a terminological quibble: RAII, as I've always
seen the term used, refers explicitly to the style of relying on that
automatic calling of destructors that is absent from C. So RAII is
idiomatic in C++ and some other languages, but not in vanilla C. See:
http://en.cppreference.com/w/cpp/language/raii where it says "RAII can
be summarized as follows... the destructor always releases the
resource".

> - We wish to have the constraint that something is opened/allocated
>   be managed as quickly as possible and in a nicely localized fashion.

OK, that is basically the other part of RAII in the summary cited
above. The RAII summary mentions exceptions, but that doesn't seem
important.

> if anything cannot be done properly within an SNMP agent evaluating a
> single PDU containing multiple varbinds, the state of the agent is to
> be exactly as it was before the PDU was received.
> So I think there's a way of doing the latter in 'C' by arranging things
> carefully, possibly using early return.

That sounds like a traditional transactional commit in a database.
There are well-established techniques for implementing that, such as a
rollback log.

>> It depends on what you're doing though; yeah, dynamic structures are
>> probably less important in MCU applications.
> I'd like to see them used less in general.

Maybe you'd like Ada, or even SPARK/Ada (which I guess is now subsumed
into Ada 2012?), better than C.

> I feel like - and this would take a long time to fully write out -
> that perhaps we can limit the number of times new() or malloc() are
> called within many software systems and improve reliability a smidge

Yes, I think MISRA and SPARK both disallow dynamic allocation.
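To make the point about automatic destructor calls concrete, here is a
minimal C++ sketch of the idiom being discussed: a wrapper whose
constructor acquires the FILE* and whose destructor releases it on every
exit path. The class name and the `first_byte` helper are invented for
illustration; they are not from the original example.

```cpp
#include <cassert>
#include <cstdio>
#include <stdexcept>
#include <string>

// Minimal RAII wrapper: the constructor acquires the FILE* and the
// destructor releases it on every exit path -- early return, exception,
// or normal fall-through. Throws if the file can't be opened, matching
// the behavior described above.
class File {
public:
    File(const std::string& path, const char* mode)
        : f_(std::fopen(path.c_str(), mode)) {
        if (!f_) throw std::runtime_error("can't open " + path);
    }
    ~File() { std::fclose(f_); }            // destructor always releases
    File(const File&) = delete;             // no accidental double-close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

// Caller never writes fclose(): the handle is released when 'f' goes
// out of scope, however the function exits.
bool first_byte(const std::string& path, int* out) {
    File f(path, "rb");                     // acquired here...
    *out = std::fgetc(f.get());
    return *out != EOF;
}                                           // ...released here, automatically
```

The C equivalent would need an explicit `fclose()` on every return path
(or a `goto cleanup` discipline), which is exactly the "nicely localized"
management the second bullet asks for.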
Languages, is popularity dominating engineering?
Started by ●December 12, 2014
Reply by ●December 16, 2014
Reply by ●December 16, 2014
On Mon, 15 Dec 2014 13:15:21 -0700, Don Y <this@is.not.me.com> wrote:

> Hi David,
>
> On 12/15/2014 12:24 PM, David Brown wrote:
>> On 15/12/14 18:41, Don Y wrote:
>
>>> As I said up-thread, let the compiler do the optimizing. Concentrate
>>> on *clearly* writing what you intend. If the compiler is clever
>>> enough to extract some nugget of efficiency from what you've written
>>> and *equivalently* transform your code into something "better", let
>>> *it* do that -- instead of *you* trying to be cryptically clever in
>>> what you *write*.
>>
>> Absolutely. Compilers are better able to optimise from clear code than
>> cryptic code - for example, they can do a better job with array
>> expressions or multiplies than when someone has tried to "help" by
>> using pointers or shifts.
>
> I think a lot of this (programmer) behavior comes from the days of
> naive compilers -- folks got used to "being clever" with their source
> to try to force particular opcodes to be generated.

In the days when compilers were hosted (running) in 64 KiB of core/RAM
or less (with possibly overlay loading from slow floppies :-), you could
not expect much optimization from most of the compilers. Some generated
quite awful code without manual help (such as common subexpression
extraction). However, I have seen some Fortran compilers generate such
good code that it was hard to beat with manual assembler, unless you
skipped the Fortran parameter-passing mechanism and used global register
allocation etc.

> *Now*, the behavior just obfuscates what the programmer is *really*
> trying to "say" (do).

These days, with compilers running on big platforms, much optimization
can be done and there is no point in trying to help the compiler. In
practice the only need for manual assembler is to utilize some special
target machine instructions that can't be expressed in an HLL.

I very much doubt that a compiler could detect a section of C code doing
an FFT and generate a special FFT instruction, if such an instruction
were available.

>>>> Minimising the scope /and/ lifetimes of variables is always good
>>>> programming practice, and gives the compiler the best chance at
>>>> finding flaws and generating good code.
>>>
>>> +1 IME, "block scope" is too infrequently exploited.
>>
>> Not by me it isn't! I am also a fan of C99's mixing of declarations
>> and statements, so that you don't have to declare variables until you
>> actually need them.
>
> Again, I think this is a legacy behavior. People get used to declaring
> variables at the top of a function -- because you *had* to.

One thing that I miss about Pascal is that you could declare procedures
within procedures, and access the local variables of the outer procedure
from an inner procedure's scope. In C you would have to declare them
outside the function or pass a huge number of parameters. In modern
C/C++, using inlining helps somewhat.
Reply by ●December 16, 2014
On 15/12/14 21:15, Don Y wrote:

> Hi David,
>
> On 12/15/2014 12:24 PM, David Brown wrote:
>> On 15/12/14 18:41, Don Y wrote:
>
>>> As I said up-thread, let the compiler do the optimizing. Concentrate
>>> on *clearly* writing what you intend. If the compiler is clever
>>> enough to extract some nugget of efficiency from what you've written
>>> and *equivalently* transform your code into something "better", let
>>> *it* do that -- instead of *you* trying to be cryptically clever in
>>> what you *write*.
>>
>> Absolutely. Compilers are better able to optimise from clear code than
>> cryptic code - for example, they can do a better job with array
>> expressions or multiplies than when someone has tried to "help" by
>> using pointers or shifts.
>
> I think a lot of this (programmer) behavior comes from the days of
> naive compilers -- folks got used to "being clever" with their source
> to try to force particular opcodes to be generated.
>
> *Now*, the behavior just obfuscates what the programmer is *really*
> trying to "say" (do).

Yes. I have worked with compilers that needed such "clever help" in
order to produce efficient code. Thankfully, I left such tools behind
for the most part. (And when working with such tools, I always examined
the generated code to see the results.)

>>>> Minimising the scope /and/ lifetimes of variables is always good
>>>> programming practice, and gives the compiler the best chance at
>>>> finding flaws and generating good code.
>>>
>>> +1 IME, "block scope" is too infrequently exploited.
>>
>> Not by me it isn't! I am also a fan of C99's mixing of declarations
>> and statements, so that you don't have to declare variables until you
>> actually need them.
>
> Again, I think this is a legacy behavior. People get used to declaring
> variables at the top of a function -- because you *had* to.

For some people, that's the case. Others think it is somehow clearer,
or better style, to declare variables at the top of the function.
People have strange ideas about code style!

> One of the hazards (inconveniences? inefficiencies?) of dealing with
> compilers of different vintage, different languages, etc. is the effort
> required to keep "best practices" in tune with the *current*
> environment.

Yes.

> E.g., I have to make a conscious effort to alter my coding style to
> exploit tuples or lists under Limbo; then suffer the opposite problem
> when I carry-over that same style to C and find the compiler
> "unreasonably" complaining.
>
> "Whaddya mean, 'syntax error'???"
>
> :-/

I know the problem!
Reply by ●December 16, 2014
On 16/12/14 09:25, upsidedown@downunder.com wrote:

> One thing that I miss about Pascal is that you could declare
> procedures within procedures and being able to access the local
> variables from the outer procedure from an inner procedure scope. In C
> you would have to declare them outside the function or pass a huge
> number of parameters. In modern C/C++ using inlining helps somewhat.

gcc supports nested functions, if you don't mind using such a
gcc-specific extension. Personally, I never made much use of nested
functions in Pascal - it made the outer function so inconveniently
long. Alternatively, C++ lambdas can act as local functions in some
ways - and of course C++ classes also cover many use-cases for nested
functions.
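As a concrete sketch of the lambda alternative mentioned above: a C++
lambda that captures enclosing locals by reference plays the role of a
Pascal nested procedure -- no extra parameters and no globals needed.
The function and variable names here are invented for illustration.

```cpp
#include <cassert>
#include <vector>

// A lambda capturing 'total' from the enclosing scope acts like a
// Pascal inner procedure reading/writing an outer procedure's local.
int sum_of_squares(const std::vector<int>& v) {
    int total = 0;
    auto accumulate_square = [&total](int x) {  // captures outer local
        total += x * x;                         // by reference
    };
    for (int x : v) accumulate_square(x);
    return total;
}
```

Unlike gcc's nested-function extension, this is portable standard C++
(C++11 and later).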
Reply by ●December 16, 2014
On 12/16/2014 1:25 AM, upsidedown@downunder.com wrote:

> On Mon, 15 Dec 2014 13:15:21 -0700, Don Y <this@is.not.me.com> wrote:
>> On 12/15/2014 12:24 PM, David Brown wrote:
>>> On 15/12/14 18:41, Don Y wrote:
>>
>>>> As I said up-thread, let the compiler do the optimizing. Concentrate
>>>> on *clearly* writing what you intend. If the compiler is clever
>>>> enough to extract some nugget of efficiency from what you've written
>>>> and *equivalently* transform your code into something "better", let
>>>> *it* do that -- instead of *you* trying to be cryptically clever in
>>>> what you *write*.
>>>
>>> Absolutely. Compilers are better able to optimise from clear code
>>> than cryptic code - for example, they can do a better job with array
>>> expressions or multiplies than when someone has tried to "help" by
>>> using pointers or shifts.
>>
>> I think a lot of this (programmer) behavior comes from the days of
>> naive compilers -- folks got used to "being clever" with their source
>> to try to force particular opcodes to be generated.
>
> In the days with compilers hosted (running) in 64 KiB of core/RAM or
> less (with possibly overlay loading from slow floppies :-), you could
> not expect much optimization from most of the compilers. Some
> generated quite awful code without manual help (such as common
> subexpression extraction).

Yes, but it goes beyond "trying to out-think the compiler". Some folks
seem to think that coming up with a "tight" way to express something in
the *source* (i.e., using the fewest glyphs/newlines/etc.) will
magically make the resulting object smaller! E.g., as if:

    a=b=c=5;

is somehow "better" than:

    c=5;
    b=c;
    a=b;

(and variations thereon).

> However I have seen some Fortran compilers generating so good code
> that it was hard to beat with manual assembler, unless you skipped the
> Fortran parameter passing mechanism and used global register
> allocations etc.

Many compilers are in that class. NatSemi's 32K C compiler was
remarkably good in the mid *80's* (considering how relatively obscure
the product was to have received that much "effort").

>> *Now*, the behavior just obfuscates what the programmer is *really*
>> trying to "say" (do).
>
> These days with compilers running on big platforms, much optimization
> can be done and there is no point in trying to help the compiler.

My point is: there is really *never* a point in trying to out-think the
compiler (if doing so comes at the expense of code clarity). The "big"
optimizations come from choices of algorithms, etc. All these other
things are "micro-optimizations". The time spent trying to sort out all
those little things, debug them, *and* their costs to those who "follow"
could better be spent napping and waiting for technology to get faster
without your effort.

The problem we tend to see in COTS devices is that others "spend" those
technological improvements in ways that aren't always beneficial to
their end user(s). E.g., "glitz" instead of "reliability"/accuracy.

> In practice the only need for manual assembler is to utilize some
> special target machine instructions that can't be expressed in HLL.
>
> I very much doubt that a compiler could detect a section of C code
> doing FFT and generate a special instruction doing FFT, if such
> instruction is available.
>
>>>>> Minimising the scope /and/ lifetimes of variables is always good
>>>>> programming practice, and gives the compiler the best chance at
>>>>> finding flaws and generating good code.
>>>>
>>>> +1 IME, "block scope" is too infrequently exploited.
>>>
>>> Not by me it isn't! I am also a fan of C99's mixing of declarations
>>> and statements, so that you don't have to declare variables until
>>> you actually need them.
>>
>> Again, I think this is a legacy behavior. People get used to declaring
>> variables at the top of a function -- because you *had* to.
>
> One thing that I miss about Pascal is that you could declare
> procedures within procedures and being able to access the local
> variables from the outer procedure from an inner procedure scope. In C
> you would have to declare them outside the function or pass a huge
> number of parameters. In modern C/C++ using inlining helps somewhat.

The same was true of Algol -- "nested functions". I miss nothing about
Pascal! :-/
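The a=b=c=5 point above can be checked directly: the chained and the
separate forms are semantically identical, so a compiler is free to emit
the same object code for either, and the "tight" source buys nothing.
This hypothetical snippet (names invented for illustration) just
demonstrates the equivalence of the two forms.

```cpp
#include <cassert>

// Both functions leave a, b and c equal to 5; they differ only in
// source-level "tightness", not in meaning. An optimizing compiler
// typically generates identical code for the two.
struct Triple { int a, b, c; };

Triple chained()  { int a, b, c; a = b = c = 5; return {a, b, c}; }
Triple separate() { int a, b, c; c = 5; b = c; a = b; return {a, b, c}; }
```

Comparing the two with `gcc -O2 -S` (or a tool like Compiler Explorer)
is an easy way to confirm the generated code matches.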
Reply by ●December 16, 2014
Hi David,

On 12/16/2014 2:58 AM, David Brown wrote:

> On 15/12/14 21:15, Don Y wrote:
>> On 12/15/2014 12:24 PM, David Brown wrote:
>>> On 15/12/14 18:41, Don Y wrote:
>>
>>>> As I said up-thread, let the compiler do the optimizing. Concentrate
>>>> on *clearly* writing what you intend. If the compiler is clever
>>>> enough to extract some nugget of efficiency from what you've written
>>>> and *equivalently* transform your code into something "better", let
>>>> *it* do that -- instead of *you* trying to be cryptically clever in
>>>> what you *write*.
>>>
>>> Absolutely. Compilers are better able to optimise from clear code
>>> than cryptic code - for example, they can do a better job with array
>>> expressions or multiplies than when someone has tried to "help" by
>>> using pointers or shifts.
>>
>> I think a lot of this (programmer) behavior comes from the days of
>> naive compilers -- folks got used to "being clever" with their source
>> to try to force particular opcodes to be generated.
>>
>> *Now*, the behavior just obfuscates what the programmer is *really*
>> trying to "say" (do).
>
> Yes. I have worked with compilers that needed such "clever help" in
> order to produce efficient code. Thankfully, I left such tools behind
> for the most part. (And when working with such tools, I always examined
> the generated code to see the results.)

IME, that was the problem: the "cleverness" didn't typically result in
more efficient code. Just "harder to read" or "easier to break". The
developer, however, would typically be *so* convinced that his "tricks"
were smarter than the compiler's "dumb" (hey, it's a machine, right?)
approach that they *must* be more efficient. And, they would be part of
his/her *style*, instead of applied where needed AND VERIFIED to achieve
their intended goals.

I.e., if you *really* think this operation needs to be expressed in this
weird manner, where's the commentary justifying your action?? (What
happens when the compiler is upgraded? Do you go back and remove all
that cruft? Or, leave the cost of its presence there when it's not
really improving anything??)

>>>> Minimising the scope /and/ lifetimes of variables is always good
>>>> programming practice, and gives the compiler the best chance at
>>>> finding flaws and generating good code.
>>>
>>> +1 IME, "block scope" is too infrequently exploited.
>>
>> Not by me it isn't! I am also a fan of C99's mixing of declarations
>> and statements, so that you don't have to declare variables until you
>> actually need them.
>
>> Again, I think this is a legacy behavior. People get used to declaring
>> variables at the top of a function -- because you *had* to.
>
> For some people, that's the case. Others think it is somehow clearer,
> or better style, to declare variables at the top of the function.
> People have strange ideas about code style!

I would *prefer* to be able to find definitions in a fixed place. But,
it's only a problem when you're dealing with some lengthy, complicated
bit of code and have to go chase down *where* the definition might lie.
Solution: strive for simple functions and good "presentation" -- so you
can more readily "perceive" where the declarations *should* be (and,
surprise!, that's where they are!)

At times, it can be frustrating, as it can add (some small amount of)
typing to your effort -- e.g., when you decide to move the declaration
(because a variable must be accessed in a larger scope; or, earlier).

Limbo allows declaration and assignment with slightly different syntax:

    foo: int;
    foo = 2;

vs.

    foo := 2;

Note that the latter is "encouraged" by the syntax -- it's easier to
type than the former. (I believe you should strive to make better
practices the ones that require the least effort from the user... so
laziness causes them to be adopted! :> )

I can't tell you the number of times I've had to change the ":=" to '='
and scroll up to insert the declaration a few lines earlier. This gets
old *really* quick! It would save a lot of un-typing/re-typing to just
put all the declarations in one common place...

My "pro bono" day (perhaps last of the year?? :> ) ...
Reply by ●December 16, 2014
On 16.12.2014 г. 10:25, upsidedown@downunder.com wrote:

> ...
> These days with compilers running on big platforms, much optimization
> can be done and there is no point in trying to help the compiler. In
> practice the only need for manual assembler is to utilize some special
> target machine instructions that can't be expressed in HLL.

It is perhaps true that compilers can be made that good, but I have yet
to see HLL-written code which my VPA-written code (where I control how
high-level it gets as I write; lines with register operations alternate
with actions on objects, etc.) won't beat by at least a factor of 10
when it comes to density. Execution speed likely too, but this is
harder to compare.

It is not just a compiler thing, it is about how the programmer thinks;
high level languages simply constrain that, assuming that he is too
stupid not to make mistakes which are known to be commonly made.

I realize I am probably the only one left in the world doing that, but
I can say that once one becomes really good at writing with access to
the low level - a good register model in the head all the time - and
all the facilities to go higher (plenty of argument/text processing
abilities, partial word extraction, recursive macros, multidimensional
text variables etc. etc.), high level languages with their predefined
"one size fits all" sentences look like tools from the stone age.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
Reply by ●December 16, 2014
Hey George! How they hangin'?

On 12/15/2014 7:45 PM, George Neuner wrote:

> On Fri, 12 Dec 2014 14:48:39 -0700, Don Y <this@is.not.me.com> wrote:
>> On 12/12/2014 1:02 PM, Ed Prochak wrote:
>>
>>> As I began my career in software and systems, choosing a programming
>>> language was at times serious. Over the years it seems that choosing
>>> a programming language has become: what is popular (or perceived
>>> popular by management).
>>
>> Or, what the diploma mills are churning out!
>
> Which would be Java and Python.

I'll take your word on that. I'm not around youngsters, much.

>>> Are you using a language because it is popular?
>>
>> Not "popular" in the sense of "en vogue" but, typically, "reasonably
>> well known" (even if not EXPERTLY known by the audience I have to
>> address). A *great* language that is obscure is typically of very
>> little value (what happens when your "investment" -- programmer --
>> moves on to another employer?)
>
> But then how do you make yourself indispensable? 8-)

<shrug> Polaroids of the boss engaged in unfortunate acts with
livestock?

> When time to market is paramount, there can be benefits to using the
> "great" language even if it is obscure. If the boss and/or the client
> doesn't object, why unduly burden yourself by using inferior
> languages? [he says, tongue planted firmly in cheek]
>
>> Often, there are other concerns that factor into a language choice
>> besides "code efficiency", "intuitiveness", "expressiveness",
>> "popularity", etc.
>
> Outside the embedded and RBS (really big science) arenas, efficiency
> is hardly even a consideration any more. Witness the proliferation of
> bytecode interpreted languages and resource managed execution
> environments. If you find yourself making extensive use of C
> libraries from your oh!_so_wonderful high level language, you probably
> should have been writing in C to begin with.

There is some merit to this. E.g., the goal of my "scripting language"
is to allow folks who probably would never be able to write a line of
code in any "modern" language to implement things with some degree of
success *without* a big investment or high anxiety. But, there,
efficiency isn't the concern.

> OTOH, the majority of programmers today have never seen a malloc() and
> wouldn't know what to do with one if they did. For most programmers,
> GC is not a nicety but a necessity without which they cannot write a
> functioning program.
>
>> You also have to consider which features of your *environment* need
>> to be supported *in* the language (else you end up relying on
>> "libraries" to augment the language's capabilities -- often in
>> sub-optimal ways).
>>
>> E.g., do you need to support concurrency? What sorts of
>> communications (a *huge* aspect of a reliable system design) do you
>> support? Does your communication mechanism impose type checking on
>> the data that it passes? Or, is it an "untyped byte stream" where you
>> rely on the targeted process/device/etc. to MANUALLY perform all of
>> that testing on the incoming data?
>>
>> For example, the scripting language (above) can't expect a neophyte
>> to be diligent and check for zero denominators. Or, to order
>> operators to preserve maximum precision in the result. As a result,
>> the *language* has to do this (at run-time or compile-time). I.e.,
>>
>>   12334235234534635645674754675675675675678567/2354234029348293492384
>>
>> should yield a *different* result from:
>>
>>   (1+12334235234534635645674754675675675675678567)/2354234029348293492384
>>
>> because the neophyte would *expect* it to produce a different result!
>> (imagine this is a subexpression in a larger expression) The neophyte
>> should be tasked with indicating the level of "detail" (precision) he
>> seeks in the output -- not the language (which would require
>> educating the neophyte as to where calculations could "blow up",
>> etc.)
>
> There is near consensus that a _safe_ language, by default, should
> provide arbitrary precision, base 10 arithmetic. The majority of
> programmers today have had little or no mathematics education and are
> unable to identify or fix potential problems caused by fixed precision
> and/or range in numerical calculations.

Joe Average *User* shouldn't be expected to do so -- hence my reason for
adding that support. They're "just trying to get an ANSWER", not write
a treatise that serves as a testament to their cleverness.

We've discussed this previously -- I suspect (modern) programmers are
similarly motivated (by Manglement and training): just push it out the
door, we'll fix it when the users tell us what's wrong! (look at all
that FREE TESTING we'll get!)

>>> Was it your choice or was it forced by management?
>
> "Pascal is a voluntarily worn straight jacket. You use it precisely
> because it won't let you do certain things. It's a PITA when you're
> writing the code, but a godsend later when you're trying to debug it."
> -- Marvin Minsky
>
> Management has a vested interest in not getting stuck with a useless
> pile of [code] if you leave, but that has to be weighed against the
> value of a developer's time. In many projects, software development
> is the major time expenditure and the major monetary expense. Anything
> which makes software developers more efficient should be welcome.

Unfortunately, software seems NOT to be considered an "investment". I'm
not sure if this is because time is not being allotted to create
"reusable components"; or, if developers are loath to reuse software
(NIH, etc.) and thus "train" management not to waste time on creating
reusable components because they WON'T be reused!

>>> How many languages do you know?
>>
>> That's sort of a silly question. Define "know". I.e., one such
>> definition might be "able to sit down and write EFFECTIVE/bug-free
>> code NOW".
>
> By that definition, few people *know* any language.
>
> I certainly can read/understand more languages than I can sit down and
> immediately start to work with (there's only ~ a half dozen there).

I can "read" lots of languages (including things like Spanish, Greek,
etc. -- despite never having learned any of them). But, I could only
*guess* as to what they were saying, in many cases.

If you don't have enough confidence to be able to spot errors *in* a
piece of code, then I wouldn't consider that as "knowing". I.e.,
something that separates the real code from "pseudo-code" in your mind
(so, instead of "it looks like this is trying to...", you are confident
saying "this code DOES...")

> But generally what prevents me using a new language quickly is not
> its grammar but its vocabulary: i.e. before I can do anything useful
> I have to learn about its "standard library".
>
> But so far as writing effective, bug free code the first time: that's
> a fine ideal, but it really isn't that important. What is important
> is that buggy code not escape from the development environment and
> that the final code be effective.

I guess it depends on the OP's intent behind the question. I see "know"
and "familiarity" as two entirely different things. I can PROBABLY sit
down and look at a huge number of different code fragments in many
languages -- including imaginary ones -- and claim, with some amount of
confidence, that "it looks like this is trying to..." (especially if
the code was written by someone who "knows" the language, so there is
some inherently high degree of correctness to it). But, I'd never
claim/imply to be able to sit down and "make it work" (in some small
number of attempts).

> Obviously, the straighter the path, the better ... but the important
> thing is the result, not how you got to it.
>
>>> Which language would you choose for a large pattern matching
>>> project?
>>
>> That would depend on the project. Most of my projects are real-time
>> (ignore the SRT/HRT distinction, as most folks are misinformed there)
>> and severely resource constrained. "Extra/unused resources" represent
>> excess cost. (You can't grow functionality to consume them after
>> the fact, as that alters the Specification -- potentially
>> compromising the entire design. So, they can only be applied ex post
>> factum to improve performance... ABOVE that which was Specified).
>>
>> My speech synthesizer is little more than a pattern matching project.
>> But, it's RT, and resource constraints dictate a small, tight
>> implementation ("Gee, wouldn't this be *so* much easier to code in
>> LISP??")
>
> Lisp is ok, but Prolog would be a more natural choice. However,
> naively written Prolog can be slower than a molasses popsicle and it
> can yield unexpected results if you don't cut appropriately. For best
> trade off of development time, program size and execution speed, you'd
> probably want to use OCaml.
>
>>> I'll sit back a bit before throwing in my thoughts.
>>
>> IME, the language isn't as important as a clear definition of the
>> problem. I've met lots of "experts" (in particular technologies,
>> languages, etc.) whose lack of knowledge of the *application* domain
>> rendered most of their knowledge *useless* (in-APPLICABLE).
>>
>> Some languages force you to clearly define THE IMPLEMENTATION. But,
>> if this still has a poor match to the actual *problem*, then the
>> language's features/facilities/CONSTRAINTS don't do anything to
>> improve the quality of the product ("Our code is 100% bug-free --
>> machine validated!" "Yeah, but it doesn't *do* what we set out to
>> do!")
>
> The definition of "bug" is operation deviating from specification.
> When in doubt, change the specification. 8-)

"Well, if we're going to *change* it, why waste time WRITING IT in the
first place?? Let's wait until we're done -- and, hell, at that point,
we won't NEED it!! :> "

Gotta go hide the last batch of cookies before C decides they're
"breakfast"...
Reply by ●December 16, 2014
On 12/16/2014 02:25 AM, upsidedown@downunder.com wrote:

> These days with compilers running on big platforms, much optimization
> can be done and there is no point in trying to help the compiler. In
> practice the only need for manual assembler is to utilize some special
> target machine instructions that can't be expressed in HLL.
>
> I very much doubt that a compiler could detect a section of C code
> doing FFT and generate a special instruction doing FFT, if such
> instruction is available.

Maybe not a full FFT, but they can do remarkable things. The following
is some code that takes advantage of the ARM short vector instructions.
The vectors a, b and c are 1024 elements of float. The loop is executed
256 times, using 4 array elements at a time. I did have to specify in
the compiler options the architecture level and that a vector unit was
there - but that was a lot easier than writing it by hand using inline
assembler or built-ins. The "big platform" was an original Beaglebone
Black (about $50) for compilation and execution.

  60                    .L3:
  20:vectest.c ****     }
  21:vectest.c ****     for (i=0;i<1024;i++) {
  22:vectest.c ****         a[i] = b[i] + c[i];
  61                        .loc 1 22 0 is_stmt 1 discriminator 2
  62 0070 660F6F84          movdqa  8240(%rsp,%rax), %xmm0
  62      04302000
  62      00
  63 0079 660FFE84          paddd   4144(%rsp,%rax), %xmm0
  63      04301000
  63      00
  64 0082 660F7F44          movdqa  %xmm0, 48(%rsp,%rax)
  64      0430
  65 0088 4883C010          addq    $16, %rax
  66 008c 483D0010          cmpq    $4096, %rax
  66      0000
  67 0092 75DC              jne     .L3
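For reference, here is a self-contained guess at the sort of source
behind that listing -- the full vectest.c isn't shown in the post, so
the function name and the exact compiler flags mentioned in the comment
are assumptions, not the original.

```cpp
#include <cstddef>

// Reconstruction of the kind of loop described above: a plain
// element-wise add over 1024 floats. With optimization enabled and a
// SIMD-capable target selected (e.g. gcc -O2 -ftree-vectorize plus an
// appropriate -march/-mfpu), the auto-vectorizer processes several
// elements per iteration, with no hand-written intrinsics needed.
constexpr std::size_t N = 1024;

void vec_add(const float* b, const float* c, float* a) {
    for (std::size_t i = 0; i < N; ++i)
        a[i] = b[i] + c[i];   // element-wise add; vectorizer does the rest
}
```

The point of the original post stands either way: writing the loop in
the simplest possible form is exactly what gives the vectorizer its best
chance.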
Reply by ●December 16, 2014
Dennis <dennis@none.none> writes:

> Maybe not a full FFT but they can do remarkable things. The following
> is some code that takes advantage of the ARM short vector
> instructions. ... The "big platform" was an original Beaglebone Black
> (about $50) for compilation and execution. ...
> 62 0070 660F6F84    movdqa 8240(%rsp,%rax), %xmm0
> ...
> 63 0079 660FFE84    paddd 4144(%rsp,%rax), %xmm0

That looks like x86 code.