Forum Replies Created
codingadvocate (Participant)
If you look at the database level, you’ll see a few entries on the default realm (even if empty) that need to exist for any new realm. You can create those in the Admin Console, at the DB level, or via the API. Those entries are: OS Parameters, Config Groups, and Default Config.
And the reason you’re only hitting an error on the shell jobs is that those are the ones that use those configs. You can see this in the job parameters too, with “loadConfigGroups” set to true. Happy discovery! 😉
codingadvocate (Participant)
Are you saving the copied jobs with different names? Jobs need unique names within the same package (e.g. within findEndpoints).
codingadvocate (Participant)
Hey G25,
You may have figured this out by now, but I just remembered this convo and may have the answer. You said you followed the video guide (the walk-through on modeling), but you didn’t mention what your “Discoverable Object” was. I think that’s the issue. I’ll explain:
The current jobs (build_application_via_processes and build_application_via_software) are configured to use a specific set of definitions. Take a look at (and compare/contrast) the job parameters. They call the same script to do the work, but the “targetQuery” and the “metaDataQuery” are specific to each job. One works off “ProcessFingerprint” and the other off “SoftwareFingerprint”. So, if your “Discoverable Object” is a different class… you just need to create a new job just like the current two, with different parameters for targetQuery and metaDataQuery. And of course you need to create those two input queries.
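For illustration, the new job’s parameter section could look something like the sketch below. Only the targetQuery and metaDataQuery parameter names come from the existing build_application_via_* jobs; the class and query names are placeholders for whatever your “Discoverable Object” actually is, and you would create those two input queries yourself.

    # Hypothetical sketch: "StorageFingerprint" and the two query names are
    # placeholders; targetQuery/metaDataQuery are the job parameters to change.
    newJobParameters = {
        "targetQuery"   : "storageFingerprints",          # input query returning your class
        "metaDataQuery" : "storageFingerprintMetaData"    # input query returning its meta-data
    }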
Cheers.
codingadvocate (Participant)
Hey G25,
Unfortunately, nothing stands out. I’d have to jump into the details with you to figure out where the problem is.
codingadvocate (Participant)
Certainly. I’d be happy to help with hours (or days or whatever you’re envisioning) of paid engagements – like training. Go to the Consulting page on the website, click on CMS Construct and use the Contact Us form. They take care of the paperwork. Tell them that you specifically want codingadvocate to help.
And for what it’s worth, your blank page on the Model View (assuming you built the model meta-data correctly) is likely because the job hasn’t run yet… to update the model. 😉 Check to see if the jobs are active; they are under the UniversalJob service – so ensure you have one of those OCP clients running.
codingadvocate (Participant)
I’m glad it’s working for you now.
How can you say thanks? You’ve freely received… so freely give. Message it out via LinkedIn or other social platforms.
You can recommend the listing on SourceForge: https://sourceforge.net/software/product/Open-Content-Platform/
Or get involved and become part of the coding community. 🙂
codingadvocate (Participant)
Yes, the software discovery in this tool is far more powerful than anything available with big-vendor proprietary ITSM discovery products. Not to mention it’s agent-less and doesn’t require a software database.
No, there isn’t a bug on the job; in fact it has nothing to do with the job. It’s on the protocol level. I called it a bug (actually a regression) because the functionality worked via a global setting many versions ago, before it was switched over to local job parameters. The protocols are intended to be wrapped/overridden by customers, but I’ll leave that topic alone for now.
As I mentioned, it’s a small change, so I went ahead and did it for you, here: https://github.com/opencontentplatform/ocp/tree/protocol-connections-via-named-endpoints
I didn’t get a chance to test it, but figured you can do that. To apply the patch: stop openContentPlatform; update the files in the last uploaded changeset (lib/protocolWrapper and database/schema/softwareElement)… in both your server and client dirs. You’ll also need to manually add the ‘endpoint’ column on the data/protocol table (a nullable VARCHAR(256)), unless you want to reinitialize the DB (which would blow away your current protocols). Then restart openContentPlatform and try again.
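If it helps, here’s a minimal sketch of that manual column change. It assumes a PostgreSQL backend and that the table is the protocol table referenced above as data/protocol; adjust the connection details and schema/table name to match your install:

    # Hedged sketch: connection details and the exact schema/table name are
    # assumptions. The column itself (a nullable VARCHAR(256) named 'endpoint')
    # is what the patch instructions above call for.
    import psycopg2

    conn = psycopg2.connect(host="localhost", dbname="ocp", user="ocp_user", password="...")
    with conn, conn.cursor() as cur:
        cur.execute("ALTER TABLE protocol ADD COLUMN endpoint VARCHAR(256)")
    conn.close()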
Whether you look into consulting services is up to you; I was simply responding to your previous request on documentation.
To my knowledge, the only free public-facing documentation is this website. You’re welcome to create some docs that you think would be helpful, and submit them as a PR.
codingadvocate (Participant)
Most of the documentation is provided through consulting services. I’d refer you to the consulting link on this website to pursue that route.
Regarding downstream usage of endpoint as FQDN:
I just took a look. It appears this is a bug from the previous migration, when this moved from a global setting (the OS Parameters config) down to the local job level. The endpoint (instead of just assuming the IP) needs to be added to the protocol instance for it to continue being used in downstream jobs after the initial Find job. I find it odd that other companies haven’t hit this… they must not be using FQDN for PowerShell. Anyway, it’s not a hard fix, but it will require a change and a test cycle.
codingadvocate (Participant)
Thanks G25. I’m happy to help.
codingadvocate (Participant)
There were several questions/directions in that first post. I’ll try to get to most of them here.
If you want a job to use FQDN (from either the connected Node or via DNS resolution), then put a “connectViaFQDN” parameter into the parameters list on the job, and set it to true. Like this, for the find_PowerShell job:
    "inputParameters" : {
        "connectViaFQDN" : true,
        "portTestBeforeConnectionAttempt" : true,
        "portsToTest" : [5985, 5986],
        "printDebug" : false
    },

FWIW, this used to be controlled by the OS Parameters config, but it was necessary to make it job configurable.
With that parameter set to true, the protocolWrapper.py script calls the getFQDN function during the protocol connection attempt.
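To illustrate the idea (this is not the actual getFQDN code from protocolWrapper.py, just a rough sketch of DNS-based resolution using Python’s standard socket module):

    # Illustration only, not OCP's getFQDN: resolve an IP to a fully qualified
    # domain name via reverse DNS, and fall back to the IP if resolution fails.
    import socket

    def resolve_fqdn(ip):
        try:
            hostname, aliases, addresses = socket.gethostbyaddr(ip)
            return socket.getfqdn(hostname)
        except socket.herror:
            return ip

    print(resolve_fqdn("192.168.4.4"))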
I don’t understand the request for powershell_app_components_base. It already passes the previously discovered software through. And after you run the discovery, you can do an API query (or set up an integration) to do the same… to pull the node-to-software content. If you’re just looking for a report, then run the “report_node_details” job from the universalJob service. It will create an Excel workbook with sample default discovered results for you… including software.
You can have a non-IP-based endpoint query… you just have to make sure other things are set up/aligned to allow it to pass through. That’s more detail than I’ll type up now, since I believe the above content gets you on the road again.
codingadvocate (Participant)
FWIW, another way to seed IPs, if they aren’t responding to ICMP (on purpose/security measures)… is to use the API. It’s pretty common to have IP Address Management products, Helpdesk products, or provisioning tools perform this operation. That way it’s tied more into the ITSM flow and corresponding lifecycles.
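As a rough illustration of that flow (and strictly an assumption on my part: the resource path and payload fields below are hypothetical, patterned after the ocp/data/<ClassName> style mentioned later in this thread, and authentication is omitted), an IPAM or provisioning tool could push an address with a simple REST call:

    # Hypothetical sketch: the endpoint path and payload fields are assumptions,
    # not documented OCP API calls; a real call would also need authentication.
    import requests

    payload = {"address": "192.168.4.4", "realm": "default"}
    response = requests.post("https://ocp.example.com/ocp/data/IpAddress", json=payload)
    print(response.status_code, response.text)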
codingadvocate (Participant)
Did you insert all those ranges into the Platform-Boundary-Networks section? Do you see those registered in the pane to the right? And if you select Platform-Boundary-Realm, do you see all the rolled up networks with the IP count including them? If so, the networks were entered and now you just need to seed the IP addresses. That means running one of the ping (ICMP) jobs via the findEndpoints package in the contentGathering service. If the IP is pingable, it will get created.
And if you go to Jobs-Modify-Toggle in the admin console and select a job like find_PowerShell_test… notice the number of endpoints and the specific listing to the right. Those are the endpoints the job will run against. If you don’t see your IPs in there (e.g. 192.168.4.4), it’s very likely because the IP isn’t in the database.
And you can see all IPs (or other objects in the DB) from the Data-Content-Objects section in the admin console.
codingadvocate (Participant)
Oh, and yes… ocp/data/SoftwarePackage is one way to pull it out. That would show a bunch of objects like this:
    {
        "time_created": "2021-04-01 15:03:24",
        "time_updated": "2021-04-01 15:03:24",
        "time_gathered": "2021-04-01 15:03:20",
        "object_created_by": "powershell_OS_software_packages",
        "object_updated_by": "powershell_OS_software_packages",
        "description": "Caution. Removing this product might prevent some applications from running.",
        "caption": "Microsoft Visual C++ 2017 x86 Debug Runtime – 14.14.26405",
        "name": "Microsoft Visual C++ 2017 x86 Debug Runtime – 14.14.26405",
        "version": "14.14.26405",
        "associated_date": "20180626",
        "recorded_by": "Windows Product from WMI",
        "company": "Microsoft Corporation",
        "vendor": "Microsoft Corporation"
    }

codingadvocate (Participant)
Yep, that job uses the package managers as I mentioned above. On Windows, it’s pulling from Win32_Products, along with the two main registry locations for add/remove programs.
If you want it to run, either use the API or the Admin Console GUI to change the job schedule (like you just showed), and then enable the job. So, make sure to change “isDisabled” to false.
I’m not sure what version of OCP you’re using, but if it’s old enough you may be able to just change the file on the server filesystem. You may have to restart OCP to pick up the file change (again, depending on the version). Newer versions use the job settings (schedule, parameters, etc) from the database table, so changing the file won’t matter after install… unless you update the package after changing the file.
But yes, all in all – change the schedule, enable the job… and it should gather that data.
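And if you’re on one of those older versions where the file-on-disk route mentioned above still works, a throwaway sketch like this is all it takes. The file path is a placeholder, and I’m assuming the job file is JSON (as the parameter snippets in this thread suggest); the only field that matters here is “isDisabled”:

    # Hedged sketch for older installs that still honor the on-disk job file.
    # The path is a placeholder; we just flip "isDisabled" to enable the job.
    import json

    jobFile = "/opt/ocp/content/contentGathering/myPackage/job/myJob.json"  # placeholder
    with open(jobFile) as fh:
        job = json.load(fh)

    job["isDisabled"] = False

    with open(jobFile, "w") as fh:
        json.dump(job, fh, indent=4)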
codingadvocate (Participant)
Looks like you’re running the dynamicDiscovery jobs. Those get information from RUNNING software, not INSTALLED software. There’s actually a VERY BIG difference between those two categories, and how you gather data for Software Asset Management (SAM). The dynamicDiscovery jobs start with active processes, in order to gather data about the software.
If you want installed software, you can create a job to go after content in the respective package managers. It’s pretty straightforward to run a command to list all packages, e.g. on Linux through yum, rpm, debian, etc. Or similar on Windows with MSI packages from Win32_Products, and the two main registry locations for add/remove programs.
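To give a sense of what such a job’s commands would look like, here’s a hedged standalone sketch (run locally for illustration, not an OCP job definition); the commands themselves are standard package-manager and registry queries:

    # Hedged sketch: standalone commands to list installed packages, illustrating
    # the kind of thing a custom discovery job would run over its shell connection.
    import platform
    import subprocess

    def list_installed_packages():
        if platform.system() == "Windows":
            # Add/Remove Programs entries from one of the two main registry locations
            cmd = ["powershell", "-Command",
                   "Get-ItemProperty 'HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\*' "
                   "| Select-Object DisplayName, DisplayVersion"]
        else:
            # RPM-based Linux example; Debian-family systems would use dpkg-query instead
            cmd = ["rpm", "-qa", "--queryformat", "%{NAME} %{VERSION}\n"]
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    print(list_installed_packages())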
Sometimes that’s useful, but it depends on the use-case and why you want the data. For example, the package managers will not provide what you need for an ITSM SAM solution. You won’t be able to find software that is provisioned to servers outside of that manager… like enabling software through auto-provisioning tools, using a custom installer, something without an installer, etc. All of which are very common.
So you have two options:
1) Use a tool that has a large software-fingerprint-type repository, to be able to normalize the libraries/files/etc. found. All the proprietary vendors listed on the main opencontentplatform.com page do it this way. That methodology requires the vendor’s repository to already know about the software before you can identify and track it… which leaves a lot of software out.
2) Use a tool that is able to dynamically discover signature data, without creating and maintaining a bunch of manual software signatures. The dynamicDiscovery jobs do it that way, and FWIW, this is the only tool I’ve seen that does. Since signatures are discovered dynamically, any new software is automatically picked up and tracked (even internally developed software that a discovery product would never have seen before). However, it only profiles RUNNING software – not all the other binaries sitting on the disk.