Edit: This was clearly not the place for this. Sorry
I've been a programmer for a long time, but I haven't done a large scale web project for a while. I want to investigate using npm for client-side front-end package management. I am not using Node.js for anything related to this project. Basically I just want to use npm to manage versions on the JS libraries the project includes. I have a bunch of dependencies, and I want to keep them up to date. I have never used npm before.
I created a test folder (~/test/), and used npm to install the libraries I want: bootstrap, jquery, d3, js-cookie, crossfilter, clipboard, etc. I have the packages in a package.json file, so I know for future deployments it's no problem to use the package.json to do the download. In ~/test/, I have node_modules/. Under node_modules/, I have a variety of subfolders for each package I've installed.
Let's take, for example, just one dependency: jQuery.
npm install jquery --save
<stuff happens>
cd node_modules/jquery
ls
AUTHORS.txt bower.json dist external LICENSE.txt package.json README.md src
Here's where I get confused. The actual file I want is ~/test/node_modules/jquery/dist/jquery.min.js. But there's an enormous amount of other stuff sitting around. The src folder, the external folder, the other stuff in the dist folder. What I want to do is install just the files I need to specifically use the library in my web project.
For just jquery, it's no problem; I can just have a makefile that copies jquery.min.js into the web project folder. But with a lot of libraries, it's a lot of work. And with everyone raving about how great npm is for front-end package management, it seems like there would be some automated way to deal with this.
I looked into the package.json for each library I was installing. I found that about half of them had a key "browser" which pointed to the file I wanted, about half had a key "main" which pointed to the file I wanted, and some had neither, or had one of them but it pointed to something other than the file I wanted. In addition, a few libraries come with CSS files and other things I need, and those weren't listed properly in the package.json file. Finally, the package.json file pointed to the non-minified versions. Normally I'd think "Well, I'll just minify them myself", but the packages come with minified versions, so why exactly do I need to rebuild? So it's not like I can just parse the package.json file in each folder and copy whatever I find there.
This blog post suggests maybe actually npm sucks at this use case and lol oh well.
What am I missing here? What's the actual step I should use to deploy the files I got from the downloaded npm packages into my web site's folder? Some links suggest Browserify, but that seems to not quite be right, that seems to be for compiling multiple modules into one package, which is not what I want to do, and it still seems to require me to manually do a lot of heavy lifting to tell it how to move the stuff.
For front-end packages most web devs use bower (https://bower.io/), it's the same idea but all the packages are front-end so I'd recommend you use that instead. The bower.json will look very similar and most decent packages have a "main" that lists the files you should stick in the page. You can use something like https://www.npmjs.com/package/main-bower-files to pick them up for your build script concatenation, minification and what-not.
npm install browserify babelify watchify jquery d3 ... --save
Then in an entry file (the path/to/app.js referenced below):
import $ from "jquery";
import bootstrap from "bootstrap";
import d3 from "d3";
import jsCookie from "js-cookie";
$("h1").html("foobar");
In package.json:
"scripts": {
    "watch:js": "watchify -v -t babelify path/to/app.js -o path/to/app.bundle.js -d",
},
Then run:
npm run watch:js
and include the bundle in your page:
<script src="app.bundle.js"></script>
Can you guys recommend me some material to get a better understanding of C++ pointers? I've been using Java for years and I'm having trouble understanding this.
Currently I have a struct member that's a pointer, and I'm trying to set that pointer to something later in a method.
Guys I am thinking of enrolling into Xamarin University
https://www.xamarin.com/university
Just to give you a little background, I did computer engineering and right now I work as an application developer. My job isn't super technical when it comes to programming, and that's what I am most afraid of. I want to advance to a better job, but everything requires so much programming knowledge, which I just don't have. I did Java and a little bit of C in university, but that's about it.
So my goal is to learn mobile app development not just for better job in future but may be to start my own start up.
Now, considering all this, do you think Xamarin University is a good way to start? The fee is about $2k for a year.
Personally, before starting a paid path, I'd try some free ones. How about the mobile app development courses on Udacity? You can do all of those for free instead of doing the paid versions of them on the site. The Android one is backed by Google.
Stump, try a module bundler like http://browserify.org/ (or Webpack or Rollup). There's some learning curve, but after a short while you can stop worrying about your deps and instead just do stuff. [detailed example]
So, it's a fairly large web app with about a dozen dependencies (largest: jquery, bootstrap, d3, crossfilter, dc, topojson) + a bunch of modules written just for the project.
My instinct would be that bundling is not especially useful for the project because the costs saved (fewer requests to load the JS modules) aren't really equal to the costs paid (a larger download, especially for pages that don't use most of the modules). I dunno what the ideal caching tradeoff is there. Do you know of any sources that discuss this problem?
Right now, our build process is that every template has some metadata for which JS modules it asks for. Those JS modules are then injected into the HTML as needed. So we keep everything stored as separate minified js files. We have the versions we're using included in our source control, so production deployments download them all from source control.
Am I really going to get a big performance benefit by setting up this system and switching to a single bundled js module?
I generally don't advocate for them because they tend to add complexity and not really buy you a whole lot; you typically only need a little bit of manual work on the build, and you'll produce better-performing code.
Man, my job is kinda pushing me towards learning JavaScript and some web dev, but that shit is crazy as hell. I found a "front-end handbook" and it was like 130 pages of things you apparently need to know these days, from builders to transpilers and post-CSS processors and god knows what else.
Like I said in my previous post, you still just use the spoon. You can write all your code in a single file and you can copy and paste third party dependencies from the internet. You can write just plain CSS and spend hours on making sure that every vendor prefix and browser quirk is covered. You can write just plain old EcmaScript 5 code and you can just link .html pages to .html pages. Just know that you'll be responsible for that spaghetti forever.
Browsers are a HUGE ASS PLATFORM. There's like A BILLION OF THEM. And each and every one of them has billions of quirks. Web development is hard, not because of the tools, but because of the targeted platform. If you could press a reset switch and just make everyone use the most modern browser always, there wouldn't be any problems. But you really, really can't. Which is why the tools exist: to make it easier for those that spend day in and day out building complex projects for complex clients, or those that just want to create the best code possible in the shortest amount of time. They are a lifesaver, they really are.
Do they add complexity? It depends. I outlined all the steps needed for a basic Browserify build; it's barely a few commands. Manual work sounds like a terrible waste of time when others have solved the problem tens of times before, ten times better than you can. Better-performing code has nothing to do with modules in general.
If you need to screw in one screw, using the other end of the spoon is okay. If you have to screw in thousands of screws (like Stump said himself, a fairly complex project), learning to use actual tools made for it will save you tons of pain and time in the end, even if there's some investment in the beginning. Bootstrap is built on modules, d3 is built on modules, topojson is built on modules...
Not only that: when you pick up the best practices and your project ends up in someone else's hands, they don't need to worry about solving the puzzle of your Own Better BuildProcess™ that only you know the details behind (we have all been there, right?).
Not only that, but third-party dependencies in source control are a great way to introduce bit rot and to ensure that keeping them up to date is as hard as possible. Not to mention endless lines of code in your source control history that aren't related to your own code. NPM (and every other package manager out there) gets tons of undeserved hate, mostly because people don't understand that it's actually pretty friggin' hard to make a package manager.
The most common reasoning for having dependencies in source control is the fear of said dependencies disappearing. When you say that that's most likely not going to happen, the "left-pad" incident often gets quoted as the prime example. When you actually think beyond "hurr durr 13 lines broke builds", this is what happened:
1. Builds and downloads were broken... for 7 minutes
2. All existing code worked just fine
3. NPM ensured that the "left-pad" type of case won't ever happen again.
I have used npm modules, and only npm modules, for (5?) years now and I have never witnessed a dependency just disappearing. I am not very worried about the future either; if NPM is down 15 years from now, I have to think really hard about situations where I a) couldn't find said dependencies anywhere and b) would want to use that code anyway.
Write modules, stop worrying, enjoy life and best of all enjoy web development, because after you get over the sea of trolls and grumpy old men, it's actually super fun. If it wasn't I most likely wouldn't do it as my day job.
What situations would you need to mess with a variable's memory address?
A pointer is a variable whose value is the memory address of another variable.
In layman's terms: you have a variable that points to a section of memory that contains some data. Example using structs:
Code:
struct example {
    int *myPointer;
};
So let's say you have an object of type example called foo. foo holds a pointer of integer type (which means it can point to memory that holds an integer).
So let's say we have our object of the struct example and a few integers.
Code:
example foo;
int bar = 5;
int bar_2 = 6;
we want foo's integer pointer (myPointer) to point to bar's address.
bar right now gives us the value 5, not its address, and myPointer needs an address to point to, so we can't do myPointer = bar. We need bar's address.
To do this we use the address-of operator, the ampersand &.
&bar returns the address of bar.
so if we do
Code:
foo.myPointer = &bar; // my integer pointer myPointer now points to bar's address.
If we print foo.myPointer right now, it'll give us an address in memory. If we want to know the value inside that block of memory, we have to "dereference" the pointer, which is denoted by the asterisk *.
So *foo.myPointer will give us bar's value, 5.
Quick rundown of everything.
Code:
foo.myPointer = bar;  // will not work, it needs bar's address, not its value
foo.myPointer = &bar; // myPointer now points to bar's address
*foo.myPointer        // now gives the value stored at the address the pointer holds
Now if you dereference myPointer and change the value, bar's value will also change, because it's changing the value in that memory.
Code:
*foo.myPointer = 900;
bar will now be 900.
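To see it end to end, here's a minimal sketch pulling the pieces above into one complete program you can compile and run (the printed values are noted in the comments):
Code:
#include <iostream>

struct example {
    int *myPointer;
};

int main() {
    example foo;
    int bar = 5;

    foo.myPointer = &bar;                      // store bar's address in the struct member
    std::cout << foo.myPointer << std::endl;   // prints an address, e.g. 0x7ffc...
    std::cout << *foo.myPointer << std::endl;  // dereference: prints 5

    *foo.myPointer = 900;                      // write through the pointer
    std::cout << bar << std::endl;             // prints 900, bar itself changed
    return 0;
}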
Overall this is a pretty good tutorial of pointers:
http://www.cplusplus.com/doc/tutorial/pointers/
You can pm me if you have any specific questions or if something is confusing. Tried to give you a quick easy rundown to get you started.
You also need to make sure you delete pointers/memory when you are no longer using them. This is beyond the scope of what I typed.
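As a tiny illustration of that point, a sketch (in modern C++ you'd usually reach for std::unique_ptr so the delete happens automatically):
Code:
#include <iostream>

int main() {
    int *heapInt = new int(5);   // allocated with new, so it lives until we delete it
    std::cout << *heapInt << std::endl;
    delete heapInt;              // without this, the memory would leak
    heapInt = nullptr;           // avoid accidentally using the dangling pointer
    return 0;
}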
If you're learning C++ I highly recommend reading Scott Meyers' Effective C++ book.
So my dad just sent me a link to this coding package. Is this a good deal? I started coding in college, but dropped it because I wasn't in a good place at that time. Are these languages useful or should I look elsewhere?
https://store.idropnews.com/sales/e...AppleMac_B_SL_Sale_Giveaways&utm_medium=email
Programming language resources are free. Apple even wrote their own book for Swift.
I have a quick algorithmic question...
I work with boolean matrices with p lines and q columns.
For efficiency purposes, I store each of them as a (p*q)-bit integer, row major.
So
001
110
100
is stored as 116 (001110100b)
All the operations I do are performed easily with ints (such as XORing matrices), shifts, ANDs (to extract the int representing a line) and LUTs (to count the number of 1s).
Except (mostly) one: extracting a column (I want 3 (011b) for the first column, 2 for the second, etc.).
I mean, I know how to do it, but given the collection of bit tricks I've seen, I'm looking for an efficient (and non-trivial) way to do it.
In other words, if an integer a is in binary
a_{n-1} ... a_2 a_1 a_0
I want the integer which is, in binary,
a_{k+(p-1)q} ... a_{k+2q} a_{k+q} a_k
Any clever idea?
"I want the integer which is, in binary,
a_{k+(p-1)q} ... a_{k+2q} a_{k+q} a_k"
So here p and q are the size of the matrix? And the add operation is no longer binary addition?
Yes. And I shouldn't have put the +; I meant the concatenation of those bits.
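If it helps, here's a rough sketch of two ways to do it, assuming the whole matrix fits in a 64-bit integer and using the same layout as your 001110100b example (k is the column's bit offset within a row, so k = q-1 picks the leftmost column; the function names are just for the example). The BMI2 PEXT version is probably the non-trivial trick you're after:
Code:
#include <cstdint>
#include <immintrin.h>   // _pext_u64; needs BMI2, compile with -mbmi2

// Portable version: walk the column one bit per row.
uint64_t column_loop(uint64_t m, int p, int q, int k) {
    uint64_t col = 0;
    for (int b = p - 1; b >= 0; --b)            // b = p-1 is the top row's q-bit block
        col = (col << 1) | ((m >> (b * q + k)) & 1u);
    return col;                                  // column_loop(0b001110100, 3, 3, 2) == 3
}

// BMI2 version: PEXT gathers the bits selected by the mask into the low bits,
// preserving their order, so the extraction itself is a single instruction.
uint64_t column_pext(uint64_t m, int p, int q, int k) {
    uint64_t mask = 0;
    for (int b = 0; b < p; ++b)                  // 1s at positions k, k+q, ..., k+(p-1)q
        mask |= uint64_t(1) << (b * q + k);      // (these masks can be precomputed per q)
    return _pext_u64(m, mask);
}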
#include <cstdlib>   // qsort
#include <cstring>   // strcmp
#include <iostream>
using namespace std;

int compareStr(const void *val1, const void *val2)
{
    const char *v1, *v2;
    v1 = *(const char **)val1;   // the key part: val1 points at an array element, i.e. at a char*
    v2 = *(const char **)val2;
    return strcmp(v1, v2);
}

int main()
{
    // Sorting an array of strings
    const char *arrStr[] = {"hij", "klm", "abc", "opq", "defg"};
    int length = sizeof(arrStr) / sizeof(arrStr[0]);

    cout << "arrStr before sorting:" << endl;
    for (int i = 0; i < length; i++)
        cout << arrStr[i] << ", ";
    cout << endl;

    qsort(arrStr, length, sizeof(arrStr[0]), compareStr);

    cout << "arrStr after sorting:" << endl;
    for (int i = 0; i < length; i++)
        cout << arrStr[i] << ", ";
    cout << endl;
}
Well, each element of arrStr is a pointer to an address storing a string, so each element of arrStr is a const char*.
But qsort will provide compareStr with the addresses of the two elements of the array that it has to compare, such as
&(arrStr[i]) and &(arrStr[j])
Those are pointers to an address that contains a pointer to an address storing a string.
So the arguments of compareStr will be (const char **).
But, for polymorphism purposes, those arguments are cast into void*.
So
(const char **)val1
is just used to recast those arguments into (const char **).
But you have &(arrStr[i]) and &(arrStr[j]), and what you want to compare are arrStr[i] and arrStr[j].
The leading * is used to get the element pointed to by val1 and val2 ("remove the &")...
I hope it's understandable (and correct...!)
So basically... qsort doesn't care about what we're comparing, whether it's int, char, etc. It casts the array elements into type void, passes their addresses into the callback, and trusts that the callback does its job as far as comparing the two values and returns 0, -1, or 1.
Yes... the kind of polymorphism hack you'll find in C. It casts the addresses of the elements into void* and passes those to the callback, but yes. That's it. Though the callback can return any positive or negative value instead of 1 and -1 (so that you could return (*(int*)v2) - (*(int*)v1) to compare integers, for example).
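For what it's worth, a quick sketch of that integer version (compareInt is just an illustrative name; the subtraction trick works but can overflow for large magnitudes, so computing the sign explicitly is safer):
Code:
#include <cstdlib>

// Comparator for qsort over an int array. Plain (a - b) can overflow,
// so derive the sign from comparisons instead.
int compareInt(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   // negative, zero, or positive, as qsort expects
}

// Usage: int v[] = {3, 1, 2}; qsort(v, 3, sizeof v[0], compareInt);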
Anybody here good with knockoutjs?
I'm stuck trying to do validation on my observables/properties, and it's really holding me up.
Maybe try the web dev OT: http://www.neogaf.com/forum/showthread.php?t=756776&page=38
Is there a consensus on where to do variable declarations in C/C++? I've changed from declaring everything up front to doing it wherever I first use the variable in my code. The handy `auto` keyword in C++ has been encouraging me a bit, although I'm definitely overusing it right now due to its freshness.
I see the advantage of this as having the type definition right next to where the variable is first needed, and that you immediately get to the code instead of having to first parse a variable list. A disadvantage is that up-front declarations can help introduce the structure of the program and which variables are important to keep track of. I feel like this would matter mostly in larger functions.
A related question is whether it's good practice to introduce scoped variables inside loops if they're calculated or reset every step anyway. Right now I'm playing around with some vectors, letting the destructor take care of them once they exit the scope and then creating a new one.
Code:
for (int i = 0; i < L; i++) {
    double isq = i*i; // bad practice?
    vector<double> vec1 { f(i), g(i), ... }; // very convenient, but maybe bad practice? new alloc each step ...
    // probably pushing it
    vector<double> vec2;
    for (int j = -n; j <= n; j++)
    {
        vec2.push_back(h(j));
    }
    ...
}
I'm still just an amateur, but I'd prefer not to pick up bad practices if I can avoid it.
I'm not familiar enough with C++ to say this for certain in this case, but generally reallocating/changing the size of vectors is really bad for performance, it's horrible. This might not matter for small applications, but it's a really bad habit.
Hmm, yeah, even if I was thinking mostly of small vectors, it's probably best to never begin taking a shortcut like that.
Also, to answer your previous question: for types that are cheap to construct, I find it preferable to place the declaration in the smallest possible scope where that variable is required. Doing so naturally limits the scope in which you have to think about that variable, which (IMO) makes code easier to read.
For types that do heap allocations, such as vectors, or are otherwise expensive to construct, it can be preferable to move these outside of loops so that the resources can be reused each iteration. But generally this is only something you need to worry about in performance-critical parts of your code (profile before optimizing).
This all sounds good to me, although I'll stop experimenting with scoping larger objects like this. Speaking of `const`, one detail I like in Rust is that variables are const by default and have to be explicitly made mutable.
Small allocations are still pretty expensive. It looks like the vector vec1 has a statically known size (known at compile time). In that case, you can use a std::array (which is a stack-allocated array with size known at compile time), which has basically zero overhead.
Point taken about the cost. While biking home I imagined some future where I had made a bad assumption about the cost of a "small" operation, finding out it's a bottleneck only after a lot of head scratching.
^^ That is also very good advice.
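To make that advice concrete, here's a rough sketch that hoists the reusable vector out of the loop and uses std::array for the fixed-size one; f, g, h, L and n are stand-ins for whatever the real code uses:
Code:
#include <array>
#include <vector>
using namespace std;

double f(int i) { return i; }          // stand-ins for the real f, g, h
double g(int i) { return i * 0.5; }
double h(int j) { return j * j; }

void example_loop(int L, int n) {
    vector<double> vec2;
    vec2.reserve(2 * n + 1);           // hoisted out of the loop: allocate once, reuse
    for (int i = 0; i < L; i++) {
        // fixed, compile-time size -> std::array lives on the stack, no heap allocation
        array<double, 2> vec1 { f(i), g(i) };

        vec2.clear();                  // keeps the capacity, so no reallocation per step
        for (int j = -n; j <= n; j++)
            vec2.push_back(h(j));
        // ... use vec1 and vec2 ...
    }
}

int main() { example_loop(10, 3); }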
Any Haskell experts around here?
I have a question about how Haskell works. I have the following function:
Code:
invTranSamp ps x = geqS x ls
  where ls = cumDis ps
As its name suggests, it's an inverse transform sampling implementation (on discrete distributions): 'cumDis' is the cumulative distribution of the distribution defined by 'ps', and 'geqS' finds (by means of a binary search) the corresponding infimum value in cumDis.
My question is: calling cumDis is an O(n log n) operation. Are the results of calling 'cumDis ps' stored in memory, so that calling 'invTranSamp' is O(n log n) once and O(log n) afterwards, or is it always O(n log n)?
The second would be pretty bad. I will be calling it millions of times and 'n' is in the hundreds of thousands :S
Your fear is correct. Haskell doesn't memoize, it only thunks.